Class GPUdb

java.lang.Object
  com.gpudb.GPUdbBase
    com.gpudb.GPUdb

public class GPUdb extends GPUdbBase

Object that provides access to a specific GPUdb server. GPUdb instances are thread safe and may be used from any number of threads simultaneously.
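Since instances are thread safe, a single GPUdb connection can be shared by multiple worker threads. A minimal sketch (the URL is a placeholder, and a running Kinetica server plus the com.gpudb client jar on the classpath are assumed; showSystemStatus is used here only as a representative request):

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import java.util.Collections;

public class SharedClientSketch {
    public static void main(String[] args) throws Exception {
        // One shared instance for the whole application (placeholder URL).
        final GPUdb gpudb = new GPUdb("http://localhost:9191");

        Runnable task = () -> {
            try {
                // Each thread may issue requests on the shared instance concurrently.
                System.out.println(
                    gpudb.showSystemStatus(Collections.emptyMap()).getStatusMap());
            } catch (GPUdbException e) {
                e.printStackTrace();
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
    }
}
```

Sharing one instance avoids the per-connection setup cost of constructing a new GPUdb object in every thread.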
Nested Class Summary

Nested classes/interfaces inherited from class com.gpudb.GPUdbBase:
GPUdbBase.ClusterAddressInfo, GPUdbBase.FailbackOptions, GPUdbBase.GetRecordsJsonResponse, GPUdbBase.GPUdbExitException, GPUdbBase.GPUdbFailoverDisabledException, GPUdbBase.GPUdbHAUnavailableException, GPUdbBase.GPUdbHostnameRegexFailureException, GPUdbBase.GPUdbUnauthorizedAccessException, GPUdbBase.GPUdbVersion, GPUdbBase.HAFailoverOrder, GPUdbBase.HASynchronicityMode, GPUdbBase.InsertRecordsJsonRequest, GPUdbBase.JsonOptions, GPUdbBase.Options, GPUdbBase.SubmitException
Field Summary

Fields inherited from class com.gpudb.GPUdbBase:
END_OF_SET, HEADER_AUTHORIZATION, HEADER_CONTENT_TYPE, HEADER_HA_SYNC_MODE, PROTECTED_HEADERS, SslErrorMessageFormat
Constructor Summary

GPUdb(String url) - Creates a GPUdb instance for the GPUdb server at the specified URL using default options.
GPUdb(String url, GPUdbBase.Options options) - Creates a GPUdb instance for the GPUdb server at the specified URL using the specified options.
GPUdb(URL url) - Creates a GPUdb instance for the GPUdb server at the specified URL using default options.
GPUdb(URL url, GPUdbBase.Options options) - Creates a GPUdb instance for the GPUdb server at the specified URL using the specified options.
GPUdb(List<URL> urls) - Creates a GPUdb instance for the GPUdb server with the specified URLs using default options.
GPUdb(List<URL> urls, GPUdbBase.Options options) - Creates a GPUdb instance for the GPUdb server with the specified URLs using the specified options.
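The options-taking constructors accept a GPUdbBase.Options object for connection settings such as credentials and timeouts. A hedged sketch (the URL, username, and password are placeholders; the setter names follow the GPUdbBase.Options API, but consult the Options documentation for the full set):

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbBase;

public class ConnectSketch {
    public static void main(String[] args) throws Exception {
        // Configure connection options before constructing the client.
        GPUdbBase.Options options = new GPUdbBase.Options();
        options.setUsername("admin");      // placeholder credentials
        options.setPassword("password");
        options.setTimeout(10000);         // HTTP request timeout, in milliseconds

        // Placeholder URL of the GPUdb server.
        GPUdb gpudb = new GPUdb("http://localhost:9191", options);
    }
}
```

The List<URL> overloads serve the same purpose but supply multiple cluster head-node URLs for high-availability failover.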
Method Summary

AdminAddHostResponse adminAddHost(AdminAddHostRequest request) - Adds a host to an existing cluster.
AdminAddHostResponse adminAddHost(String hostAddress, Map<String,String> options) - Adds a host to an existing cluster.
AdminAddRanksResponse adminAddRanks(AdminAddRanksRequest request) - Add one or more ranks to an existing Kinetica cluster.
AdminAddRanksResponse adminAddRanks(List<String> hosts, List<Map<String,String>> configParams, Map<String,String> options) - Add one or more ranks to an existing Kinetica cluster.
AdminAlterHostResponse adminAlterHost(AdminAlterHostRequest request) - Alter properties on an existing host in the cluster.
AdminAlterHostResponse adminAlterHost(String host, Map<String,String> options) - Alter properties on an existing host in the cluster.
AdminAlterJobsResponse adminAlterJobs(AdminAlterJobsRequest request) - Perform the requested action on a list of one or more jobs.
AdminAlterJobsResponse adminAlterJobs(List<Long> jobIds, String action, Map<String,String> options) - Perform the requested action on a list of one or more jobs.
AdminBackupBeginResponse adminBackupBegin(AdminBackupBeginRequest request) - Prepares the system for a backup by closing all open file handles after allowing current active jobs to complete.
AdminBackupBeginResponse adminBackupBegin(Map<String,String> options) - Prepares the system for a backup by closing all open file handles after allowing current active jobs to complete.
AdminBackupEndResponse adminBackupEnd(AdminBackupEndRequest request) - Restores the system to normal operating mode after a backup has completed, allowing any queries that were blocked to complete.
AdminBackupEndResponse adminBackupEnd(Map<String,String> options) - Restores the system to normal operating mode after a backup has completed, allowing any queries that were blocked to complete.
AdminHaOfflineResponse adminHaOffline(boolean offline, Map<String,String> options) - Pauses consumption of messages from other HA clusters to support data repair/recovery scenarios.
AdminHaOfflineResponse adminHaOffline(AdminHaOfflineRequest request) - Pauses consumption of messages from other HA clusters to support data repair/recovery scenarios.
AdminHaRefreshResponse adminHaRefresh(AdminHaRefreshRequest request) - Restarts the HA processing on the given cluster as a mechanism of accepting breaking HA conf changes.
AdminHaRefreshResponse adminHaRefresh(Map<String,String> options) - Restarts the HA processing on the given cluster as a mechanism of accepting breaking HA conf changes.
AdminOfflineResponse adminOffline(boolean offline, Map<String,String> options) - Take the system offline.
AdminOfflineResponse adminOffline(AdminOfflineRequest request) - Take the system offline.
AdminRebalanceResponse adminRebalance(AdminRebalanceRequest request) - Rebalance the data in the cluster so that all nodes contain an approximately equal number of records, and/or rebalance the shards to be distributed as equally as possible across all the ranks.
AdminRebalanceResponse adminRebalance(Map<String,String> options) - Rebalance the data in the cluster so that all nodes contain an approximately equal number of records, and/or rebalance the shards to be distributed as equally as possible across all the ranks.
AdminRemoveHostResponse adminRemoveHost(AdminRemoveHostRequest request) - Removes a host from an existing cluster.
AdminRemoveHostResponse adminRemoveHost(String host, Map<String,String> options) - Removes a host from an existing cluster.
AdminRemoveRanksResponse adminRemoveRanks(AdminRemoveRanksRequest request) - Remove one or more ranks from an existing Kinetica cluster.
AdminRemoveRanksResponse adminRemoveRanks(List<String> ranks, Map<String,String> options) - Remove one or more ranks from an existing Kinetica cluster.
AdminRepairTableResponse adminRepairTable(AdminRepairTableRequest request) - Manually repair a corrupted table.
AdminRepairTableResponse adminRepairTable(List<String> tableNames, Map<String,String> tableTypes, Map<String,String> options) - Manually repair a corrupted table.
AdminSendAlertResponse adminSendAlert(AdminSendAlertRequest request) - Sends a user-generated alert to the monitoring system.
AdminSendAlertResponse adminSendAlert(String message, String label, String logLevel, Map<String,String> options) - Sends a user-generated alert to the monitoring system.
AdminShowAlertsResponse adminShowAlerts(int numAlerts, Map<String,String> options) - Requests a list of the most recent alerts.
AdminShowAlertsResponse adminShowAlerts(AdminShowAlertsRequest request) - Requests a list of the most recent alerts.
AdminShowClusterOperationsResponse adminShowClusterOperations(int historyIndex, Map<String,String> options) - Requests the detailed status of the current operation (by default) or a prior cluster operation specified by historyIndex.
AdminShowClusterOperationsResponse adminShowClusterOperations(AdminShowClusterOperationsRequest request) - Requests the detailed status of the current operation (by default) or a prior cluster operation specified by historyIndex.
AdminShowJobsResponse adminShowJobs(AdminShowJobsRequest request) - Get a list of the current jobs in GPUdb.
AdminShowJobsResponse adminShowJobs(Map<String,String> options) - Get a list of the current jobs in GPUdb.
AdminShowShardsResponse adminShowShards(AdminShowShardsRequest request) - Show the mapping of shards to the corresponding rank and tom.
AdminShowShardsResponse adminShowShards(Map<String,String> options) - Show the mapping of shards to the corresponding rank and tom.
AdminShutdownResponse adminShutdown(AdminShutdownRequest request) - Exits the database server application.
AdminShutdownResponse adminShutdown(String exitType, String authorization, Map<String,String> options) - Exits the database server application.
AdminSwitchoverResponse adminSwitchover(AdminSwitchoverRequest request) - Manually switch over one or more processes to another host.
AdminSwitchoverResponse adminSwitchover(List<String> processes, List<String> destinations, Map<String,String> options) - Manually switch over one or more processes to another host.
AdminVerifyDbResponse adminVerifyDb(AdminVerifyDbRequest request) - Verify that the database is in a consistent state.
AdminVerifyDbResponse adminVerifyDb(Map<String,String> options) - Verify that the database is in a consistent state.
AggregateConvexHullResponse aggregateConvexHull(AggregateConvexHullRequest request) - Calculates and returns the convex hull for the values in a table specified by tableName.
AggregateConvexHullResponse aggregateConvexHull(String tableName, String xColumnName, String yColumnName, Map<String,String> options) - Calculates and returns the convex hull for the values in a table specified by tableName.
AggregateGroupByResponse aggregateGroupBy(AggregateGroupByRequest request) - Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination.
AggregateGroupByResponse aggregateGroupBy(String tableName, List<String> columnNames, long offset, long limit, Map<String,String> options) - Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination.
RawAggregateGroupByResponse aggregateGroupByRaw(AggregateGroupByRequest request) - Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination.
AggregateHistogramResponse aggregateHistogram(AggregateHistogramRequest request) - Performs a histogram calculation given a table, a column, and an interval function.
AggregateHistogramResponse aggregateHistogram(String tableName, String columnName, double start, double end, double interval, Map<String,String> options) - Performs a histogram calculation given a table, a column, and an interval function.
AggregateKMeansResponse aggregateKMeans(AggregateKMeansRequest request) - This endpoint runs the k-means algorithm, a heuristic algorithm that attempts to do k-means clustering.
AggregateKMeansResponse aggregateKMeans(String tableName, List<String> columnNames, int k, double tolerance, Map<String,String> options) - This endpoint runs the k-means algorithm, a heuristic algorithm that attempts to do k-means clustering.
AggregateMinMaxResponse aggregateMinMax(AggregateMinMaxRequest request) - Calculates and returns the minimum and maximum values of a particular column in a table.
AggregateMinMaxResponse aggregateMinMax(String tableName, String columnName, Map<String,String> options) - Calculates and returns the minimum and maximum values of a particular column in a table.
AggregateMinMaxGeometryResponse aggregateMinMaxGeometry(AggregateMinMaxGeometryRequest request) - Calculates and returns the minimum and maximum x- and y-coordinates of a particular geospatial geometry column in a table.
AggregateMinMaxGeometryResponse aggregateMinMaxGeometry(String tableName, String columnName, Map<String,String> options) - Calculates and returns the minimum and maximum x- and y-coordinates of a particular geospatial geometry column in a table.
AggregateStatisticsResponse aggregateStatistics(AggregateStatisticsRequest request) - Calculates the requested statistics of the given column(s) in a given table.
AggregateStatisticsResponse aggregateStatistics(String tableName, String columnName, String stats, Map<String,String> options) - Calculates the requested statistics of the given column(s) in a given table.
AggregateStatisticsByRangeResponse aggregateStatisticsByRange(AggregateStatisticsByRangeRequest request) - Divides the given set into bins and calculates statistics of the values of a value-column in each bin.
AggregateStatisticsByRangeResponse aggregateStatisticsByRange(String tableName, String selectExpression, String columnName, String valueColumnName, String stats, double start, double end, double interval, Map<String,String> options) - Divides the given set into bins and calculates statistics of the values of a value-column in each bin.
AggregateUniqueResponse aggregateUnique(AggregateUniqueRequest request) - Returns all the unique values from a particular column (specified by columnName) of a particular table or view (specified by tableName).
AggregateUniqueResponse aggregateUnique(String tableName, String columnName, long offset, long limit, Map<String,String> options) - Returns all the unique values from a particular column (specified by columnName) of a particular table or view (specified by tableName).
RawAggregateUniqueResponse aggregateUniqueRaw(AggregateUniqueRequest request) - Returns all the unique values from a particular column (specified by columnName) of a particular table or view (specified by tableName).
AggregateUnpivotResponse aggregateUnpivot(AggregateUnpivotRequest request) - Rotates column values into row values.
AggregateUnpivotResponse aggregateUnpivot(String tableName, List<String> columnNames, String variableColumnName, String valueColumnName, List<String> pivotedColumns, Map<String,String> options) - Rotates column values into row values.
RawAggregateUnpivotResponse aggregateUnpivotRaw(AggregateUnpivotRequest request) - Rotates column values into row values.
AlterBackupResponse alterBackup(AlterBackupRequest request)
AlterBackupResponse alterBackup(String backupName, String action, String value, String datasinkName, Map<String,String> options)
AlterCredentialResponse alterCredential(AlterCredentialRequest request) - Alter the properties of an existing credential.
AlterCredentialResponse alterCredential(String credentialName, Map<String,String> credentialUpdatesMap, Map<String,String> options) - Alter the properties of an existing credential.
AlterDatasinkResponse alterDatasink(AlterDatasinkRequest request) - Alters the properties of an existing data sink.
AlterDatasinkResponse alterDatasink(String name, Map<String,String> datasinkUpdatesMap, Map<String,String> options) - Alters the properties of an existing data sink.
AlterDatasourceResponse alterDatasource(AlterDatasourceRequest request) - Alters the properties of an existing data source.
AlterDatasourceResponse alterDatasource(String name, Map<String,String> datasourceUpdatesMap, Map<String,String> options) - Alters the properties of an existing data source.
AlterDirectoryResponse alterDirectory(AlterDirectoryRequest request) - Alters an existing directory in KiFS.
AlterDirectoryResponse alterDirectory(String directoryName, Map<String,String> directoryUpdatesMap, Map<String,String> options) - Alters an existing directory in KiFS.
AlterEnvironmentResponse alterEnvironment(AlterEnvironmentRequest request) - Alters an existing environment which can be referenced by a user-defined function (UDF).
AlterEnvironmentResponse alterEnvironment(String environmentName, String action, String value, Map<String,String> options) - Alters an existing environment which can be referenced by a user-defined function (UDF).
AlterGraphResponse alterGraph(AlterGraphRequest request)
AlterGraphResponse alterGraph(String graphName, String action, String actionArg, Map<String,String> options)
AlterModelResponse alterModel(AlterModelRequest request)
AlterModelResponse alterModel(String modelName, String action, String value, Map<String,String> options)
AlterResourceGroupResponse alterResourceGroup(AlterResourceGroupRequest request) - Alters the properties of an existing resource group to facilitate resource management.
AlterResourceGroupResponse alterResourceGroup(String name, Map<String,Map<String,String>> tierAttributes, String ranking, String adjoiningResourceGroup, Map<String,String> options) - Alters the properties of an existing resource group to facilitate resource management.
AlterRoleResponse alterRole(AlterRoleRequest request) - Alters a role.
AlterRoleResponse alterRole(String name, String action, String value, Map<String,String> options) - Alters a role.
AlterSchemaResponse alterSchema(AlterSchemaRequest request) - Used to change the name of a SQL-style schema, specified in schemaName.
AlterSchemaResponse alterSchema(String schemaName, String action, String value, Map<String,String> options) - Used to change the name of a SQL-style schema, specified in schemaName.
AlterSystemPropertiesResponse alterSystemProperties(AlterSystemPropertiesRequest request) - The alterSystemProperties endpoint is primarily used to simplify the testing of the system and is not expected to be used during normal execution.
AlterSystemPropertiesResponse alterSystemProperties(Map<String,String> propertyUpdatesMap, Map<String,String> options) - The alterSystemProperties endpoint is primarily used to simplify the testing of the system and is not expected to be used during normal execution.
AlterTableResponse alterTable(AlterTableRequest request) - Apply various modifications to a table or view.
AlterTableResponse alterTable(String tableName, String action, String value, Map<String,String> options) - Apply various modifications to a table or view.
AlterTableColumnsResponse alterTableColumns(AlterTableColumnsRequest request) - Apply various modifications to columns in a table or view.
AlterTableColumnsResponse alterTableColumns(String tableName, List<Map<String,String>> columnAlterations, Map<String,String> options) - Apply various modifications to columns in a table or view.
AlterTableMetadataResponse alterTableMetadata(AlterTableMetadataRequest request) - Updates (adds or changes) metadata for tables.
AlterTableMetadataResponse alterTableMetadata(List<String> tableNames, Map<String,String> metadataMap, Map<String,String> options) - Updates (adds or changes) metadata for tables.
AlterTableMonitorResponse alterTableMonitor(AlterTableMonitorRequest request) - Alters a table monitor previously created with createTableMonitor.
AlterTableMonitorResponse alterTableMonitor(String topicId, Map<String,String> monitorUpdatesMap, Map<String,String> options) - Alters a table monitor previously created with createTableMonitor.
AlterTierResponse alterTier(AlterTierRequest request) - Alters properties of an existing tier to facilitate resource management.
AlterTierResponse alterTier(String name, Map<String,String> options) - Alters properties of an existing tier to facilitate resource management.
AlterUserResponse alterUser(AlterUserRequest request) - Alters a user.
AlterUserResponse alterUser(String name, String action, String value, Map<String,String> options) - Alters a user.
AlterVideoResponse alterVideo(AlterVideoRequest request) - Alters a video.
AlterVideoResponse alterVideo(String path, Map<String,String> options) - Alters a video.
AlterWalResponse alterWal(AlterWalRequest request) - Alters table write-ahead log (WAL) settings.
AlterWalResponse alterWal(List<String> tableNames, Map<String,String> options) - Alters table write-ahead log (WAL) settings.
AppendRecordsResponse appendRecords(AppendRecordsRequest request) - Append (or insert) all records from a source table (specified by sourceTableName) to a particular target table (specified by tableName).
AppendRecordsResponse appendRecords(String tableName, String sourceTableName, Map<String,String> fieldMap, Map<String,String> options) - Append (or insert) all records from a source table (specified by sourceTableName) to a particular target table (specified by tableName).
ClearStatisticsResponse clearStatistics(ClearStatisticsRequest request) - Clears statistics (cardinality, mean value, etc.) for a column in a specified table.
ClearStatisticsResponse clearStatistics(String tableName, String columnName, Map<String,String> options) - Clears statistics (cardinality, mean value, etc.) for a column in a specified table.
ClearTableResponse clearTable(ClearTableRequest request) - Clears (drops) one or all tables in the database cluster.
ClearTableResponse clearTable(String tableName, String authorization, Map<String,String> options) - Clears (drops) one or all tables in the database cluster.
ClearTableMonitorResponse clearTableMonitor(ClearTableMonitorRequest request) - Deactivates a table monitor previously created with createTableMonitor.
ClearTableMonitorResponse clearTableMonitor(String topicId, Map<String,String> options) - Deactivates a table monitor previously created with createTableMonitor.
ClearTablesResponse clearTables(ClearTablesRequest request) - Clears (drops) tables in the database cluster.
ClearTablesResponse clearTables(List<String> tableNames, Map<String,String> options) - Clears (drops) tables in the database cluster.
ClearTriggerResponse clearTrigger(ClearTriggerRequest request) - Clears or cancels the trigger identified by the specified handle.
ClearTriggerResponse clearTrigger(String triggerId, Map<String,String> options) - Clears or cancels the trigger identified by the specified handle.
CollectStatisticsResponse collectStatistics(CollectStatisticsRequest request) - Collect statistics for one or more columns in a specified table.
CollectStatisticsResponse collectStatistics(String tableName, List<String> columnNames, Map<String,String> options) - Collect statistics for one or more columns in a specified table.
CreateBackupResponse createBackup(CreateBackupRequest request) - Creates a database backup, containing a snapshot of existing objects, at the remote file store accessible via the data sink specified by datasinkName.
CreateBackupResponse createBackup(String backupName, String backupType, Map<String,String> backupObjectsMap, String datasinkName, Map<String,String> options)
CreateCatalogResponse createCatalog(CreateCatalogRequest request) - Creates a catalog, which contains the location and connection information for a deltalake catalog that is external to the database.
CreateCatalogResponse createCatalog(String name, String tableFormat, String location, String type, String credential, String datasource, Map<String,String> options) - Creates a catalog, which contains the location and connection information for a deltalake catalog that is external to the database.
CreateContainerRegistryResponse createContainerRegistry(CreateContainerRegistryRequest request)
CreateContainerRegistryResponse createContainerRegistry(String registryName, String uri, String credential, Map<String,String> options)
CreateCredentialResponse createCredential(CreateCredentialRequest request) - Create a new credential.
CreateCredentialResponse createCredential(String credentialName, String type, String identity, String secret, Map<String,String> options) - Create a new credential.
CreateDatasinkResponse createDatasink(CreateDatasinkRequest request) - Creates a data sink, which contains the destination information for a data sink that is external to the database.
CreateDatasinkResponse createDatasink(String name, String destination, Map<String,String> options) - Creates a data sink, which contains the destination information for a data sink that is external to the database.
CreateDatasourceResponse createDatasource(CreateDatasourceRequest request) - Creates a data source, which contains the location and connection information for a data store that is external to the database.
CreateDatasourceResponse createDatasource(String name, String location, String userName, String password, Map<String,String> options) - Creates a data source, which contains the location and connection information for a data store that is external to the database.
CreateDeltaTableResponse createDeltaTable(CreateDeltaTableRequest request)
CreateDeltaTableResponse createDeltaTable(String deltaTableName, String tableName, Map<String,String> options)
CreateDirectoryResponse createDirectory(CreateDirectoryRequest request) - Creates a new directory in KiFS.
CreateDirectoryResponse createDirectory(String directoryName, Map<String,String> options) - Creates a new directory in KiFS.
CreateEnvironmentResponse createEnvironment(CreateEnvironmentRequest request) - Creates a new environment which can be used by user-defined functions (UDF).
CreateEnvironmentResponse createEnvironment(String environmentName, Map<String,String> options) - Creates a new environment which can be used by user-defined functions (UDF).
CreateGraphResponse createGraph(CreateGraphRequest request) - Creates a new graph network using given nodes, edges, weights, and restrictions.
CreateGraphResponse createGraph(String graphName, boolean directedGraph, List<String> nodes, List<String> edges, List<String> weights, List<String> restrictions, Map<String,String> options) - Creates a new graph network using given nodes, edges, weights, and restrictions.
CreateJobResponse createJob(CreateJobRequest request) - Create a job which will run asynchronously.
CreateJobResponse createJob(String endpoint, String requestEncoding, ByteBuffer data, String dataStr, Map<String,String> options) - Create a job which will run asynchronously.
CreateJoinTableResponse createJoinTable(CreateJoinTableRequest request) - Creates a table that is the result of a SQL JOIN.
CreateJoinTableResponse createJoinTable(String joinTableName, List<String> tableNames, List<String> columnNames, List<String> expressions, Map<String,String> options) - Creates a table that is the result of a SQL JOIN.
CreateMaterializedViewResponse createMaterializedView(CreateMaterializedViewRequest request) - Initiates the process of creating a materialized view, reserving the view's name to prevent other views or tables from being created with that name.
CreateMaterializedViewResponse createMaterializedView(String tableName, Map<String,String> options) - Initiates the process of creating a materialized view, reserving the view's name to prevent other views or tables from being created with that name.
CreateProcResponse createProc(CreateProcRequest request) - Creates an instance (proc) of the user-defined functions (UDF) specified by the given command, options, and files, and makes it available for execution.
CreateProcResponse createProc(String procName, String executionMode, Map<String,ByteBuffer> files, String command, List<String> args, Map<String,String> options) - Creates an instance (proc) of the user-defined functions (UDF) specified by the given command, options, and files, and makes it available for execution.
CreateProjectionResponse createProjection(CreateProjectionRequest request) - Creates a new projection of an existing table.
CreateProjectionResponse createProjection(String tableName, String projectionName, List<String> columnNames, Map<String,String> options) - Creates a new projection of an existing table.
CreateResourceGroupResponse createResourceGroup(CreateResourceGroupRequest request) - Creates a new resource group to facilitate resource management.
CreateResourceGroupResponse createResourceGroup(String name, Map<String,Map<String,String>> tierAttributes, String ranking, String adjoiningResourceGroup, Map<String,String> options) - Creates a new resource group to facilitate resource management.
CreateRoleResponse createRole(CreateRoleRequest request) - Creates a new role.
CreateRoleResponse createRole(String name, Map<String,String> options) - Creates a new role.
CreateSchemaResponse createSchema(CreateSchemaRequest request) - Creates a SQL-style schema.
CreateSchemaResponse createSchema(String schemaName, Map<String,String> options) - Creates a SQL-style schema.
CreateStateTableResponse createStateTable(CreateStateTableRequest request)
CreateStateTableResponse createStateTable(String tableName, String inputTableName, String initTableName, Map<String,String> options)
CreateTableResponse createTable(CreateTableRequest request) - Creates a new table with the given type (definition of columns).
CreateTableResponse createTable(String tableName, String typeId, Map<String,String> options) - Creates a new table with the given type (definition of columns).
CreateTableExternalResponse createTableExternal(CreateTableExternalRequest request) - Creates a new external table, which is a local database object whose source data is located externally to the database.
CreateTableExternalResponse createTableExternal(String tableName, List<String> filepaths, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options) - Creates a new external table, which is a local database object whose source data is located externally to the database.
CreateTableMonitorResponse createTableMonitor(CreateTableMonitorRequest request) - Creates a monitor that watches for a single table modification event type (insert, update, or delete) on a particular table (identified by tableName) and forwards event notifications to subscribers via ZMQ.
CreateTableMonitorResponse createTableMonitor(String tableName, Map<String,String> options) - Creates a monitor that watches for a single table modification event type (insert, update, or delete) on a particular table (identified by tableName) and forwards event notifications to subscribers via ZMQ.
CreateTriggerByAreaResponse createTriggerByArea(CreateTriggerByAreaRequest request) - Sets up an area trigger mechanism for two column_names for one or more tables.
CreateTriggerByAreaResponse createTriggerByArea(String requestId, List<String> tableNames, String xColumnName, List<Double> xVector, String yColumnName, List<Double> yVector, Map<String,String> options) - Sets up an area trigger mechanism for two column_names for one or more tables.
CreateTriggerByRangeResponse createTriggerByRange(CreateTriggerByRangeRequest request) - Sets up a simple range trigger for a column_name for one or more tables.
CreateTriggerByRangeResponse createTriggerByRange(String requestId, List<String> tableNames, String columnName, double min, double max, Map<String,String> options) - Sets up a simple range trigger for a column_name for one or more tables.
CreateTypeResponse createType(CreateTypeRequest request) - Creates a new type describing the columns of a table.
CreateTypeResponse createType(String typeDefinition, String label, Map<String,List<String>> properties, Map<String,String> options) - Creates a new type describing the columns of a table.
CreateUnionResponse createUnion(CreateUnionRequest request) - Merges data from one or more tables with comparable data types into a new table.
CreateUnionResponse createUnion(String tableName, List<String> tableNames, List<List<String>> inputColumnNames, List<String> outputColumnNames, Map<String,String> options) - Merges data from one or more tables with comparable data types into a new table.
CreateUserExternalResponse createUserExternal(CreateUserExternalRequest request) - Creates a new external user (a user whose credentials are managed by an external LDAP).
CreateUserExternalResponse createUserExternal(String name, Map<String,String> options) - Creates a new external user (a user whose credentials are managed by an external LDAP).
CreateUserInternalResponse createUserInternal(CreateUserInternalRequest request) - Creates a new internal user (a user whose credentials are managed by the database system).
CreateUserInternalResponse createUserInternal(String name, String password, Map<String,String> options) - Creates a new internal user (a user whose credentials are managed by the database system).
CreateVideoResponse createVideo(CreateVideoRequest request) - Creates a job to generate a sequence of raster images that visualize data over a specified time.
CreateVideoResponse createVideo(String attribute, String begin, double durationSeconds, String end, double framesPerSecond, String style, String path, String styleParameters, Map<String,String> options) - Creates a job to generate a sequence of raster images that visualize data over a specified time.
DeleteDirectoryResponse deleteDirectory(DeleteDirectoryRequest request) - Deletes a directory from KiFS.
DeleteDirectoryResponse deleteDirectory(String directoryName, Map<String,String> options) - Deletes a directory from KiFS.
DeleteFilesResponse deleteFiles(DeleteFilesRequest request) - Deletes one or more files from KiFS.
DeleteFilesResponse deleteFiles(List<String> fileNames, Map<String,String> options) - Deletes one or more files from KiFS.
DeleteGraphResponse deleteGraph(DeleteGraphRequest request) - Deletes an existing graph from the graph server and/or persist.
DeleteGraphResponse deleteGraph(String graphName, Map<String,String> options) - Deletes an existing graph from the graph server and/or persist.
DeleteProcResponse deleteProc(DeleteProcRequest request) - Deletes a proc.
DeleteProcResponse deleteProc(String procName, Map<String,String> options) - Deletes a proc.
DeleteRecordsResponse deleteRecords(DeleteRecordsRequest request) - Deletes record(s) matching the provided criteria from the given table.
DeleteRecordsResponse deleteRecords(String tableName, List<String> expressions, Map<String,String> options) - Deletes record(s) matching the provided criteria from the given table.
DeleteResourceGroupResponse deleteResourceGroup(DeleteResourceGroupRequest request) - Deletes a resource group.
DeleteResourceGroupResponse deleteResourceGroup(String name, Map<String,String> options) - Deletes a resource group.
DeleteRoleResponse deleteRole(DeleteRoleRequest request) - Deletes an existing role.
DeleteRoleResponse deleteRole(String name, Map<String,String> options) - Deletes an existing role.
DeleteUserResponse deleteUser(DeleteUserRequest request) - Deletes an existing user.
DeleteUserResponse deleteUser(String name, Map<String,String> options) - Deletes an existing user.
DownloadFilesResponse downloadFiles(DownloadFilesRequest request) - Downloads one or more files from KiFS.
DownloadFilesResponse downloadFiles(List<String> fileNames, List<Long> readOffsets, List<Long> readLengths, Map<String,String> options) - Downloads one or more files from KiFS.
DropBackupResponse dropBackup(DropBackupRequest request) - Deletes one or more existing database backups and contained snapshots, accessible via the data sink specified by datasinkName.
DropBackupResponse dropBackup(String backupName, String datasinkName, Map<String,String> options)
DropCatalogResponse dropCatalog(DropCatalogRequest request) - Drops an existing catalog.
DropCatalogResponse dropCatalog(String name, Map<String,String> options) - Drops an existing catalog.
DropContainerRegistryResponse dropContainerRegistry(DropContainerRegistryRequest request)
DropContainerRegistryResponse dropContainerRegistry(String registryName, Map<String,String> options)
DropCredentialResponse dropCredential(DropCredentialRequest request) - Drop an existing credential.
DropCredentialResponse dropCredential(String credentialName, Map<String,String> options) - Drop an existing credential.
DropDatasinkResponse dropDatasink(DropDatasinkRequest request) - Drops an existing data sink.
DropDatasinkResponse dropDatasink(String name, Map<String,String> options) - Drops an existing data sink.
DropDatasourceResponse dropDatasource(DropDatasourceRequest request) - Drops an existing data source.
DropDatasourceResponse dropDatasource(String name, Map<String,String> options) - Drops an existing data source.
DropEnvironmentResponse dropEnvironment(DropEnvironmentRequest request) - Drop an existing user-defined function (UDF) environment.
DropEnvironmentResponse dropEnvironment(String environmentName, Map<String,String> options) - Drop an existing user-defined function (UDF) environment.
DropModelResponse dropModel(DropModelRequest request)
DropModelResponse dropModel(String modelName, Map<String,String> options)
DropSchemaResponse dropSchema(DropSchemaRequest request) - Drops an existing SQL-style schema, specified in schemaName.
DropSchemaResponse dropSchema(String schemaName, Map<String,String> options) - Drops an existing SQL-style schema, specified in schemaName.
EvaluateModelResponse evaluateModel(EvaluateModelRequest request)
EvaluateModelResponse evaluateModel(String modelName, int replicas, String deploymentMode, String sourceTable, String destinationTable, Map<String,String> options)
ExecuteProcResponse executeProc(ExecuteProcRequest request) - Executes a proc.
ExecuteProcResponse executeProc(String procName, Map<String,String> params, Map<String,ByteBuffer> binParams, List<String> inputTableNames, Map<String,List<String>> inputColumnNames, List<String> outputTableNames, Map<String,String> options) - Executes a proc.
ExecuteSqlResponse executeSql(ExecuteSqlRequest request) - Execute a SQL statement (query, DML, or DDL).
ExecuteSqlResponse executeSql(String statement, long offset, long limit, String requestSchemaStr, List<ByteBuffer> data, Map<String,String> options) - Execute a SQL statement (query, DML, or DDL).
RawExecuteSqlResponse executeSqlRaw(ExecuteSqlRequest request) - Execute a SQL statement (query, DML, or DDL).
ExportQueryMetricsResponse exportQueryMetrics(ExportQueryMetricsRequest request) - Export query metrics to a given destination.
ExportQueryMetricsResponse exportQueryMetrics(Map<String,String> options) - Export query metrics to a given destination.
ExportRecordsToFilesResponse exportRecordsToFiles(ExportRecordsToFilesRequest request) - Export records from a table to files.
ExportRecordsToFilesResponse exportRecordsToFiles(String tableName, String filepath, Map<String,String> options) - Export records from a table to files.
ExportRecordsToTableResponse exportRecordsToTable(ExportRecordsToTableRequest request) - Exports records from a source table to the specified target table in an external database.
ExportRecordsToTableResponse exportRecordsToTable(String tableName, String remoteQuery, Map<String,String> options) - Exports records from a source table to the specified target table in an external database.
FilterResponse filter(FilterRequest request) - Filters data based on the specified expression.
FilterResponse filter(String tableName, String viewName, String expression, Map<String,String> options) - Filters data based on the specified expression.
FilterByAreaResponse filterByArea(FilterByAreaRequest request) - Calculates which objects from a table are within a named area of interest (NAI/polygon).
FilterByAreaResponse filterByArea(String tableName, String viewName, String xColumnName, List<Double> xVector, String yColumnName, List<Double>
yVector, Map<String,String> options)Calculates which objects from a table are within a named area of interest (NAI/polygon).FilterByAreaGeometryResponsefilterByAreaGeometry(FilterByAreaGeometryRequest request)Calculates which geospatial geometry objects from a table intersect a named area of interest (NAI/polygon).FilterByAreaGeometryResponsefilterByAreaGeometry(String tableName, String viewName, String columnName, List<Double> xVector, List<Double> yVector, Map<String,String> options)Calculates which geospatial geometry objects from a table intersect a named area of interest (NAI/polygon).FilterByBoxResponsefilterByBox(FilterByBoxRequest request)Calculates how many objects within the given table lie in a rectangular box.FilterByBoxResponsefilterByBox(String tableName, String viewName, String xColumnName, double minX, double maxX, String yColumnName, double minY, double maxY, Map<String,String> options)Calculates how many objects within the given table lie in a rectangular box.FilterByBoxGeometryResponsefilterByBoxGeometry(FilterByBoxGeometryRequest request)Calculates which geospatial geometry objects from a table intersect a rectangular box.FilterByBoxGeometryResponsefilterByBoxGeometry(String tableName, String viewName, String columnName, double minX, double maxX, double minY, double maxY, Map<String,String> options)Calculates which geospatial geometry objects from a table intersect a rectangular box.FilterByGeometryResponsefilterByGeometry(FilterByGeometryRequest request)Applies a geometry filter against a geospatial geometry column in a given table or view.FilterByGeometryResponsefilterByGeometry(String tableName, String viewName, String columnName, String inputWkt, String operation, Map<String,String> options)Applies a geometry filter against a geospatial geometry column in a given table or view.FilterByListResponsefilterByList(FilterByListRequest request)Calculates which records from a table have values in the given list for the corresponding 
column.FilterByListResponsefilterByList(String tableName, String viewName, Map<String,List<String>> columnValuesMap, Map<String,String> options)Calculates which records from a table have values in the given list for the corresponding column.FilterByRadiusResponsefilterByRadius(FilterByRadiusRequest request)Calculates which objects from a table lie within a circle with the given radius and center point (i.e. circular NAI).FilterByRadiusResponsefilterByRadius(String tableName, String viewName, String xColumnName, double xCenter, String yColumnName, double yCenter, double radius, Map<String,String> options)Calculates which objects from a table lie within a circle with the given radius and center point (i.e. circular NAI).FilterByRadiusGeometryResponsefilterByRadiusGeometry(FilterByRadiusGeometryRequest request)Calculates which geospatial geometry objects from a table intersect a circle with the given radius and center point (i.e. circular NAI).FilterByRadiusGeometryResponsefilterByRadiusGeometry(String tableName, String viewName, String columnName, double xCenter, double yCenter, double radius, Map<String,String> options)Calculates which geospatial geometry objects from a table intersect a circle with the given radius and center point (i.e. 
circular NAI).FilterByRangeResponsefilterByRange(FilterByRangeRequest request)Calculates which objects from a table have a column that is within the given bounds.FilterByRangeResponsefilterByRange(String tableName, String viewName, String columnName, double lowerBound, double upperBound, Map<String,String> options)Calculates which objects from a table have a column that is within the given bounds.FilterBySeriesResponsefilterBySeries(FilterBySeriesRequest request)Filters objects matching all points of the given track (works only on track type data).FilterBySeriesResponsefilterBySeries(String tableName, String viewName, String trackId, List<String> targetTrackIds, Map<String,String> options)Filters objects matching all points of the given track (works only on track type data).FilterByStringResponsefilterByString(FilterByStringRequest request)Calculates which objects from a table or view match a string expression for the given string columns.FilterByStringResponsefilterByString(String tableName, String viewName, String expression, String mode, List<String> columnNames, Map<String,String> options)Calculates which objects from a table or view match a string expression for the given string columns.FilterByTableResponsefilterByTable(FilterByTableRequest request)Filters objects in one table based on objects in another table.FilterByTableResponsefilterByTable(String tableName, String viewName, String columnName, String sourceTableName, String sourceTableColumnName, Map<String,String> options)Filters objects in one table based on objects in another table.FilterByValueResponsefilterByValue(FilterByValueRequest request)Calculates which objects from a table has a particular value for a particular column.FilterByValueResponsefilterByValue(String tableName, String viewName, boolean isString, double value, String valueStr, String columnName, Map<String,String> options)Calculates which objects from a table has a particular value for a particular column.GetJobResponsegetJob(long 
jobId, Map<String,String> options)Get the status and result of asynchronously running job.GetJobResponsegetJob(GetJobRequest request)Get the status and result of asynchronously running job.<TResponse>
GetRecordsResponse<TResponse>getRecords(GetRecordsRequest request)Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column.<TResponse>
GetRecordsResponse<TResponse>getRecords(Object typeDescriptor, GetRecordsRequest request)Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column.<TResponse>
GetRecordsResponse<TResponse>getRecords(Object typeDescriptor, String tableName, long offset, long limit, Map<String,String> options)Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column.<TResponse>
GetRecordsResponse<TResponse>getRecords(String tableName, long offset, long limit, Map<String,String> options)Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column.GetRecordsByColumnResponsegetRecordsByColumn(GetRecordsByColumnRequest request)For a given table, retrieves the values from the requested column(s).GetRecordsByColumnResponsegetRecordsByColumn(String tableName, List<String> columnNames, long offset, long limit, Map<String,String> options)For a given table, retrieves the values from the requested column(s).RawGetRecordsByColumnResponsegetRecordsByColumnRaw(GetRecordsByColumnRequest request)For a given table, retrieves the values from the requested column(s).<TResponse>
GetRecordsBySeriesResponse<TResponse>getRecordsBySeries(GetRecordsBySeriesRequest request)Retrieves the complete series/track records from the givenworldTableNamebased on the partial track information contained in thetableName.<TResponse>
GetRecordsBySeriesResponse<TResponse>getRecordsBySeries(Object typeDescriptor, GetRecordsBySeriesRequest request)Retrieves the complete series/track records from the givenworldTableNamebased on the partial track information contained in thetableName.<TResponse>
GetRecordsBySeriesResponse<TResponse>getRecordsBySeries(Object typeDescriptor, String tableName, String worldTableName, int offset, int limit, Map<String,String> options)Retrieves the complete series/track records from the givenworldTableNamebased on the partial track information contained in thetableName.<TResponse>
GetRecordsBySeriesResponse<TResponse>getRecordsBySeries(String tableName, String worldTableName, int offset, int limit, Map<String,String> options)Retrieves the complete series/track records from the givenworldTableNamebased on the partial track information contained in thetableName.RawGetRecordsBySeriesResponsegetRecordsBySeriesRaw(GetRecordsBySeriesRequest request)Retrieves the complete series/track records from the givenworldTableNamebased on the partial track information contained in thetableName.<TResponse>
GetRecordsFromCollectionResponse<TResponse>getRecordsFromCollection(GetRecordsFromCollectionRequest request)Retrieves records from a collection.<TResponse>
GetRecordsFromCollectionResponse<TResponse>getRecordsFromCollection(Object typeDescriptor, GetRecordsFromCollectionRequest request)Retrieves records from a collection.<TResponse>
GetRecordsFromCollectionResponse<TResponse>getRecordsFromCollection(Object typeDescriptor, String tableName, long offset, long limit, Map<String,String> options)Retrieves records from a collection.<TResponse>
GetRecordsFromCollectionResponse<TResponse>getRecordsFromCollection(String tableName, long offset, long limit, Map<String,String> options)Retrieves records from a collection.RawGetRecordsFromCollectionResponsegetRecordsFromCollectionRaw(GetRecordsFromCollectionRequest request)Retrieves records from a collection.RawGetRecordsResponsegetRecordsRaw(GetRecordsRequest request)Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column.GetVectortileResponsegetVectortile(GetVectortileRequest request)GetVectortileResponsegetVectortile(List<String> tableNames, List<String> columnNames, Map<String,List<String>> layers, int tileX, int tileY, int zoom, Map<String,String> options)GrantPermissionResponsegrantPermission(GrantPermissionRequest request)Grant user or role the specified permission on the specified object.GrantPermissionResponsegrantPermission(String principal, String object, String objectType, String permission, Map<String,String> options)Grant user or role the specified permission on the specified object.GrantPermissionCredentialResponsegrantPermissionCredential(GrantPermissionCredentialRequest request)Grants a credential-level permission to a user or role.GrantPermissionCredentialResponsegrantPermissionCredential(String name, String permission, String credentialName, Map<String,String> options)Grants a credential-level permission to a user or role.GrantPermissionDatasourceResponsegrantPermissionDatasource(GrantPermissionDatasourceRequest request)Grants a data source permission to a user or role.GrantPermissionDatasourceResponsegrantPermissionDatasource(String name, String permission, String datasourceName, Map<String,String> options)Grants a data source permission to a user or role.GrantPermissionDirectoryResponsegrantPermissionDirectory(GrantPermissionDirectoryRequest request)Grants a KiFS directory-level permission to a user or role.GrantPermissionDirectoryResponsegrantPermissionDirectory(String name, String permission, 
String directoryName, Map<String,String> options)Grants a KiFS directory-level permission to a user or role.GrantPermissionProcResponsegrantPermissionProc(GrantPermissionProcRequest request)Grants a proc-level permission to a user or role.GrantPermissionProcResponsegrantPermissionProc(String name, String permission, String procName, Map<String,String> options)Grants a proc-level permission to a user or role.GrantPermissionSystemResponsegrantPermissionSystem(GrantPermissionSystemRequest request)Grants a system-level permission to a user or role.GrantPermissionSystemResponsegrantPermissionSystem(String name, String permission, Map<String,String> options)Grants a system-level permission to a user or role.GrantPermissionTableResponsegrantPermissionTable(GrantPermissionTableRequest request)Grants a table-level permission to a user or role.GrantPermissionTableResponsegrantPermissionTable(String name, String permission, String tableName, String filterExpression, Map<String,String> options)Grants a table-level permission to a user or role.GrantRoleResponsegrantRole(GrantRoleRequest request)Grants membership in a role to a user or role.GrantRoleResponsegrantRole(String role, String member, Map<String,String> options)Grants membership in a role to a user or role.HasPermissionResponsehasPermission(HasPermissionRequest request)Checks if the specified user has the specified permission on the specified object.HasPermissionResponsehasPermission(String principal, String object, String objectType, String permission, Map<String,String> options)Checks if the specified user has the specified permission on the specified object.HasProcResponsehasProc(HasProcRequest request)Checks the existence of a proc with the given name.HasProcResponsehasProc(String procName, Map<String,String> options)Checks the existence of a proc with the given name.HasRoleResponsehasRole(HasRoleRequest request)Checks if the specified user has the specified role.HasRoleResponsehasRole(String principal, String 
role, Map<String,String> options)Checks if the specified user has the specified role.HasSchemaResponsehasSchema(HasSchemaRequest request)Checks for the existence of a schema with the given name.HasSchemaResponsehasSchema(String schemaName, Map<String,String> options)Checks for the existence of a schema with the given name.HasTableResponsehasTable(HasTableRequest request)Checks for the existence of a table with the given name.HasTableResponsehasTable(String tableName, Map<String,String> options)Checks for the existence of a table with the given name.HasTypeResponsehasType(HasTypeRequest request)Check for the existence of a type.HasTypeResponsehasType(String typeId, Map<String,String> options)Check for the existence of a type.ImportModelResponseimportModel(ImportModelRequest request)ImportModelResponseimportModel(String modelName, String registryName, String container, String runFunction, String modelType, Map<String,String> options)<TRequest> InsertRecordsResponseinsertRecords(InsertRecordsRequest<TRequest> request)Adds multiple records to the specified table.<TRequest> InsertRecordsResponseinsertRecords(TypeObjectMap<TRequest> typeObjectMap, InsertRecordsRequest<TRequest> request)Adds multiple records to the specified table.<TRequest> InsertRecordsResponseinsertRecords(TypeObjectMap<TRequest> typeObjectMap, String tableName, List<TRequest> data, Map<String,String> options)Adds multiple records to the specified table.<TRequest> InsertRecordsResponseinsertRecords(String tableName, List<TRequest> data, Map<String,String> options)Adds multiple records to the specified table.InsertRecordsFromFilesResponseinsertRecordsFromFiles(InsertRecordsFromFilesRequest request)Reads from one or more files and inserts the data into a new or existing table.InsertRecordsFromFilesResponseinsertRecordsFromFiles(String tableName, List<String> filepaths, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options)Reads from one or more 
files and inserts the data into a new or existing table.InsertRecordsFromPayloadResponseinsertRecordsFromPayload(InsertRecordsFromPayloadRequest request)Reads from the given text-based or binary payload and inserts the data into a new or existing table.InsertRecordsFromPayloadResponseinsertRecordsFromPayload(String tableName, String dataText, ByteBuffer dataBytes, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options)Reads from the given text-based or binary payload and inserts the data into a new or existing table.InsertRecordsFromQueryResponseinsertRecordsFromQuery(InsertRecordsFromQueryRequest request)Computes remote query result and inserts the result data into a new or existing tableInsertRecordsFromQueryResponseinsertRecordsFromQuery(String tableName, String remoteQuery, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options)Computes remote query result and inserts the result data into a new or existing tableInsertRecordsRandomResponseinsertRecordsRandom(InsertRecordsRandomRequest request)Generates a specified number of random records and adds them to the given table.InsertRecordsRandomResponseinsertRecordsRandom(String tableName, long count, Map<String,Map<String,Double>> options)Generates a specified number of random records and adds them to the given table.InsertRecordsResponseinsertRecordsRaw(RawInsertRecordsRequest request)Adds multiple records to the specified table.InsertSymbolResponseinsertSymbol(InsertSymbolRequest request)Adds a symbol or icon (i.e. an image) to represent data points when data is rendered visually.InsertSymbolResponseinsertSymbol(String symbolId, String symbolFormat, ByteBuffer symbolData, Map<String,String> options)Adds a symbol or icon (i.e. 
an image) to represent data points when data is rendered visually.KillProcResponsekillProc(KillProcRequest request)Kills a running proc instance.KillProcResponsekillProc(String runId, Map<String,String> options)Kills a running proc instance.ListGraphResponselistGraph(ListGraphRequest request)ListGraphResponselistGraph(String graphName, Map<String,String> options)LockTableResponselockTable(LockTableRequest request)Manages global access to a table's data.LockTableResponselockTable(String tableName, String lockType, Map<String,String> options)Manages global access to a table's data.MatchGraphResponsematchGraph(MatchGraphRequest request)Matches a directed route implied by a given set of latitude/longitude points to an existing underlying road network graph using a given solution type.MatchGraphResponsematchGraph(String graphName, List<String> samplePoints, String solveMethod, String solutionTable, Map<String,String> options)Matches a directed route implied by a given set of latitude/longitude points to an existing underlying road network graph using a given solution type.ModifyGraphResponsemodifyGraph(ModifyGraphRequest request)Update an existing graph network using given nodes, edges, weights, restrictions, and options.ModifyGraphResponsemodifyGraph(String graphName, List<String> nodes, List<String> edges, List<String> weights, List<String> restrictions, Map<String,String> options)Update an existing graph network using given nodes, edges, weights, restrictions, and options.QueryGraphResponsequeryGraph(QueryGraphRequest request)Employs a topological query on a graph generated a-priori bycreateGraphand returns a list of adjacent edge(s) or node(s), also known as an adjacency list, depending on what's been provided to the endpoint; providing edges will return nodes and providing nodes will return edges.QueryGraphResponsequeryGraph(String graphName, List<String> queries, List<String> restrictions, String adjacencyTable, int rings, Map<String,String> options)Employs a 
topological query on a graph generated a-priori bycreateGraphand returns a list of adjacent edge(s) or node(s), also known as an adjacency list, depending on what's been provided to the endpoint; providing edges will return nodes and providing nodes will return edges.RepartitionGraphResponserepartitionGraph(RepartitionGraphRequest request)Rebalances an existing partitioned graph.RepartitionGraphResponserepartitionGraph(String graphName, Map<String,String> options)Rebalances an existing partitioned graph.ReserveResourceResponsereserveResource(ReserveResourceRequest request)ReserveResourceResponsereserveResource(String component, String name, String action, long bytesRequested, long ownerId, Map<String,String> options)RestoreBackupResponserestoreBackup(RestoreBackupRequest request)RestoreBackupResponserestoreBackup(String backupName, Map<String,String> restoreObjectsMap, String datasourceName, Map<String,String> options)RevokePermissionResponserevokePermission(RevokePermissionRequest request)Revoke user or role the specified permission on the specified object.RevokePermissionResponserevokePermission(String principal, String object, String objectType, String permission, Map<String,String> options)Revoke user or role the specified permission on the specified object.RevokePermissionCredentialResponserevokePermissionCredential(RevokePermissionCredentialRequest request)Revokes a credential-level permission from a user or role.RevokePermissionCredentialResponserevokePermissionCredential(String name, String permission, String credentialName, Map<String,String> options)Revokes a credential-level permission from a user or role.RevokePermissionDatasourceResponserevokePermissionDatasource(RevokePermissionDatasourceRequest request)Revokes a data source permission from a user or role.RevokePermissionDatasourceResponserevokePermissionDatasource(String name, String permission, String datasourceName, Map<String,String> options)Revokes a data source permission from a user or 
role.RevokePermissionDirectoryResponserevokePermissionDirectory(RevokePermissionDirectoryRequest request)Revokes a KiFS directory-level permission from a user or role.RevokePermissionDirectoryResponserevokePermissionDirectory(String name, String permission, String directoryName, Map<String,String> options)Revokes a KiFS directory-level permission from a user or role.RevokePermissionProcResponserevokePermissionProc(RevokePermissionProcRequest request)Revokes a proc-level permission from a user or role.RevokePermissionProcResponserevokePermissionProc(String name, String permission, String procName, Map<String,String> options)Revokes a proc-level permission from a user or role.RevokePermissionSystemResponserevokePermissionSystem(RevokePermissionSystemRequest request)Revokes a system-level permission from a user or role.RevokePermissionSystemResponserevokePermissionSystem(String name, String permission, Map<String,String> options)Revokes a system-level permission from a user or role.RevokePermissionTableResponserevokePermissionTable(RevokePermissionTableRequest request)Revokes a table-level permission from a user or role.RevokePermissionTableResponserevokePermissionTable(String name, String permission, String tableName, Map<String,String> options)Revokes a table-level permission from a user or role.RevokeRoleResponserevokeRole(RevokeRoleRequest request)Revokes membership in a role from a user or role.RevokeRoleResponserevokeRole(String role, String member, Map<String,String> options)Revokes membership in a role from a user or role.ShowBackupResponseshowBackup(ShowBackupRequest request)Shows information about one or more backups accessible via the data source specified bydatasourceName.ShowBackupResponseshowBackup(String backupName, String datasourceName, Map<String,String> options)Shows information about one or more backups accessible via the data source specified bydatasourceName.ShowContainerRegistryResponseshowContainerRegistry(ShowContainerRegistryRequest 
request)ShowContainerRegistryResponseshowContainerRegistry(String registryName, Map<String,String> options)ShowCredentialResponseshowCredential(ShowCredentialRequest request)Shows information about a specified credential or all credentials.ShowCredentialResponseshowCredential(String credentialName, Map<String,String> options)Shows information about a specified credential or all credentials.ShowDatasinkResponseshowDatasink(ShowDatasinkRequest request)Shows information about a specified data sink or all data sinks.ShowDatasinkResponseshowDatasink(String name, Map<String,String> options)Shows information about a specified data sink or all data sinks.ShowDatasourceResponseshowDatasource(ShowDatasourceRequest request)Shows information about a specified data source or all data sources.ShowDatasourceResponseshowDatasource(String name, Map<String,String> options)Shows information about a specified data source or all data sources.ShowDirectoriesResponseshowDirectories(ShowDirectoriesRequest request)Shows information about directories in KiFS.ShowDirectoriesResponseshowDirectories(String directoryName, Map<String,String> options)Shows information about directories in KiFS.ShowEnvironmentResponseshowEnvironment(ShowEnvironmentRequest request)Shows information about a specified user-defined function (UDF) environment or all environments.ShowEnvironmentResponseshowEnvironment(String environmentName, Map<String,String> options)Shows information about a specified user-defined function (UDF) environment or all environments.ShowFilesResponseshowFiles(ShowFilesRequest request)Shows information about files in KiFS.ShowFilesResponseshowFiles(List<String> paths, Map<String,String> options)Shows information about files in KiFS.ShowFunctionsResponseshowFunctions(ShowFunctionsRequest request)ShowFunctionsResponseshowFunctions(List<String> names, Map<String,String> options)ShowGraphResponseshowGraph(ShowGraphRequest request)Shows information and characteristics of graphs that exist on the 
graph server.
ShowGraphResponse showGraph(String graphName, Map<String,String> options) - Shows information and characteristics of graphs that exist on the graph server.
ShowGraphGrammarResponse showGraphGrammar(ShowGraphGrammarRequest request)
ShowGraphGrammarResponse showGraphGrammar(Map<String,String> options)
ShowModelResponse showModel(ShowModelRequest request)
ShowModelResponse showModel(List<String> modelNames, Map<String,String> options)
ShowProcResponse showProc(ShowProcRequest request) - Shows information about a proc.
ShowProcResponse showProc(String procName, Map<String,String> options) - Shows information about a proc.
ShowProcStatusResponse showProcStatus(ShowProcStatusRequest request) - Shows the statuses of running or completed proc instances.
ShowProcStatusResponse showProcStatus(String runId, Map<String,String> options) - Shows the statuses of running or completed proc instances.
ShowResourceGroupsResponse showResourceGroups(ShowResourceGroupsRequest request) - Requests resource group properties.
ShowResourceGroupsResponse showResourceGroups(List<String> names, Map<String,String> options) - Requests resource group properties.
ShowResourceObjectsResponse showResourceObjects(ShowResourceObjectsRequest request) - Returns information about the internal sub-components (tiered objects) which use resources of the system.
ShowResourceObjectsResponse showResourceObjects(Map<String,String> options) - Returns information about the internal sub-components (tiered objects) which use resources of the system.
ShowResourceStatisticsResponse showResourceStatistics(ShowResourceStatisticsRequest request) - Requests various statistics for storage/memory tiers and resource groups.
ShowResourceStatisticsResponse showResourceStatistics(Map<String,String> options) - Requests various statistics for storage/memory tiers and resource groups.
ShowSchemaResponse showSchema(ShowSchemaRequest request) - Retrieves information about a schema (or all schemas), as specified in schemaName.
ShowSchemaResponse showSchema(String schemaName, Map<String,String> options) - Retrieves information about a schema (or all schemas), as specified in schemaName.
ShowSecurityResponse showSecurity(ShowSecurityRequest request) - Shows security information relating to users and/or roles.
ShowSecurityResponse showSecurity(List<String> names, Map<String,String> options) - Shows security information relating to users and/or roles.
ShowSqlProcResponse showSqlProc(ShowSqlProcRequest request) - Shows information about SQL procedures, including the full definition of each requested procedure.
ShowSqlProcResponse showSqlProc(String procedureName, Map<String,String> options) - Shows information about SQL procedures, including the full definition of each requested procedure.
ShowStatisticsResponse showStatistics(ShowStatisticsRequest request) - Retrieves the collected column statistics for the specified table(s).
ShowStatisticsResponse showStatistics(List<String> tableNames, Map<String,String> options) - Retrieves the collected column statistics for the specified table(s).
ShowSystemPropertiesResponse showSystemProperties(ShowSystemPropertiesRequest request) - Returns server configuration and version related information to the caller.
ShowSystemPropertiesResponse showSystemProperties(Map<String,String> options) - Returns server configuration and version related information to the caller.
ShowSystemStatusResponse showSystemStatus(ShowSystemStatusRequest request) - Provides server configuration and health related status to the caller.
ShowSystemStatusResponse showSystemStatus(Map<String,String> options) - Provides server configuration and health related status to the caller.
ShowSystemTimingResponse showSystemTiming(ShowSystemTimingRequest request) - Returns the last 100 database requests along with the request timing and internal job ID.
ShowSystemTimingResponse showSystemTiming(Map<String,String> options) - Returns the last 100 database requests along with the request timing and internal job ID.
ShowTableResponse showTable(ShowTableRequest request) - Retrieves detailed information about a table, view, or schema, specified in tableName.
ShowTableResponse showTable(String tableName, Map<String,String> options) - Retrieves detailed information about a table, view, or schema, specified in tableName.
ShowTableMetadataResponse showTableMetadata(ShowTableMetadataRequest request) - Retrieves the user-provided metadata for the specified tables.
ShowTableMetadataResponse showTableMetadata(List<String> tableNames, Map<String,String> options) - Retrieves the user-provided metadata for the specified tables.
ShowTableMonitorsResponse showTableMonitors(ShowTableMonitorsRequest request) - Shows table monitors and their properties.
ShowTableMonitorsResponse showTableMonitors(List<String> monitorIds, Map<String,String> options) - Shows table monitors and their properties.
ShowTablesByTypeResponse showTablesByType(ShowTablesByTypeRequest request) - Gets names of the tables whose type matches the given criteria.
ShowTablesByTypeResponse showTablesByType(String typeId, String label, Map<String,String> options) - Gets names of the tables whose type matches the given criteria.
ShowTriggersResponse showTriggers(ShowTriggersRequest request) - Retrieves information regarding the specified triggers or all existing triggers currently active.
ShowTriggersResponse showTriggers(List<String> triggerIds, Map<String,String> options) - Retrieves information regarding the specified triggers or all existing triggers currently active.
ShowTypesResponse showTypes(ShowTypesRequest request) - Retrieves information for the specified data type ID or type label.
ShowTypesResponse showTypes(String typeId, String label, Map<String,String> options) - Retrieves information for the specified data type ID or type label.
ShowVideoResponse showVideo(ShowVideoRequest request) - Retrieves information about rendered videos.
ShowVideoResponse showVideo(List<String> paths, Map<String,String> options) - Retrieves information about rendered videos.
ShowWalResponse showWal(ShowWalRequest request) - Requests table write-ahead log (WAL) properties.
ShowWalResponse showWal(List<String> tableNames, Map<String,String> options) - Requests table write-ahead log (WAL) properties.
SolveGraphResponse solveGraph(SolveGraphRequest request) - Solves an existing graph for a type of problem (e.g., shortest path, page rank, traveling salesman, etc.) using source nodes, destination nodes, and additional, optional weights and restrictions.
SolveGraphResponse solveGraph(String graphName, List<String> weightsOnEdges, List<String> restrictions, String solverType, List<String> sourceNodes, List<String> destinationNodes, String solutionTable, Map<String,String> options) - Solves an existing graph for a type of problem (e.g., shortest path, page rank, traveling salesman, etc.) using source nodes, destination nodes, and additional, optional weights and restrictions.
<TRequest> UpdateRecordsResponse updateRecords(UpdateRecordsRequest<TRequest> request) - Runs multiple predicate-based updates in a single call.
<TRequest> UpdateRecordsResponse updateRecords(TypeObjectMap<TRequest> typeObjectMap, UpdateRecordsRequest<TRequest> request) - Runs multiple predicate-based updates in a single call.
<TRequest> UpdateRecordsResponse updateRecords(TypeObjectMap<TRequest> typeObjectMap, String tableName, List<String> expressions, List<Map<String,String>> newValuesMaps, List<TRequest> data, Map<String,String> options) - Runs multiple predicate-based updates in a single call.
<TRequest> UpdateRecordsResponse updateRecords(String tableName, List<String> expressions, List<Map<String,String>> newValuesMaps, List<TRequest> data, Map<String,String> options) - Runs multiple predicate-based updates in a single call.
UpdateRecordsResponse updateRecordsRaw(RawUpdateRecordsRequest request) - Runs multiple predicate-based updates in a single call.
UploadFilesResponse uploadFiles(UploadFilesRequest request) - Uploads one or more files to KiFS.
UploadFilesResponse uploadFiles(List<String> fileNames, List<ByteBuffer> fileData, Map<String,String> options) - Uploads one or more files to KiFS.
UploadFilesFromurlResponse uploadFilesFromurl(UploadFilesFromurlRequest request) - Uploads one or more files to KiFS.
UploadFilesFromurlResponse uploadFilesFromurl(List<String> fileNames, List<String> urls, Map<String,String> options) - Uploads one or more files to KiFS.
VisualizeGetFeatureInfoResponse visualizeGetFeatureInfo(VisualizeGetFeatureInfoRequest request)
VisualizeGetFeatureInfoResponse visualizeGetFeatureInfo(List<String> tableNames, List<String> xColumnNames, List<String> yColumnNames, List<String> geometryColumnNames, List<List<String>> queryColumnNames, String projection, double minX, double maxX, double minY, double maxY, int width, int height, int x, int y, int radius, long limit, String encoding, Map<String,String> options)
VisualizeImageResponse visualizeImage(VisualizeImageRequest request)
VisualizeImageResponse visualizeImage(List<String> tableNames, List<String> worldTableNames, String xColumnName, String yColumnName, String symbolColumnName, String geometryColumnName, List<List<String>> trackIds, double minX, double maxX, double minY, double maxY, int width, int height, String projection, long bgColor, Map<String,List<String>> styleOptions, Map<String,String> options)
VisualizeImageChartResponse visualizeImageChart(VisualizeImageChartRequest request) - Scatter plot is the only plot type currently supported.
VisualizeImageChartResponse visualizeImageChart(String tableName, List<String> xColumnNames, List<String> yColumnNames, double minX, double maxX, double minY, double maxY, int width, int height, String bgColor, Map<String,List<String>> styleOptions, Map<String,String> options) - Scatter plot is the only plot type currently supported.
VisualizeImageClassbreakResponse visualizeImageClassbreak(VisualizeImageClassbreakRequest request)
VisualizeImageClassbreakResponse visualizeImageClassbreak(List<String> tableNames, List<String> worldTableNames, String xColumnName, String yColumnName, String symbolColumnName, String geometryColumnName, List<List<String>> trackIds, String cbAttr, List<String> cbVals, String cbPointcolorAttr, List<String> cbPointcolorVals, String cbPointalphaAttr, List<String> cbPointalphaVals, String cbPointsizeAttr, List<String> cbPointsizeVals, String cbPointshapeAttr, List<String> cbPointshapeVals, double minX, double maxX, double minY, double maxY, int width, int height, String projection, long bgColor, Map<String,List<String>> styleOptions, Map<String,String> options, List<Integer> cbTransparencyVec)
VisualizeImageContourResponse visualizeImageContour(VisualizeImageContourRequest request)
VisualizeImageContourResponse visualizeImageContour(List<String> tableNames, String xColumnName, String yColumnName, String valueColumnName, double minX, double maxX, double minY, double maxY, int width, int height, String projection, Map<String,String> styleOptions, Map<String,String> options)
VisualizeImageHeatmapResponse visualizeImageHeatmap(VisualizeImageHeatmapRequest request)
VisualizeImageHeatmapResponse visualizeImageHeatmap(List<String> tableNames, String xColumnName, String yColumnName, String valueColumnName, String geometryColumnName, double minX, double maxX, double minY, double maxY, int width, int height, String projection, Map<String,String> styleOptions, Map<String,String> options)
VisualizeImageLabelsResponse visualizeImageLabels(VisualizeImageLabelsRequest request)
VisualizeImageLabelsResponse visualizeImageLabels(String tableName, String xColumnName, String yColumnName, String xOffset, String yOffset, String textString, String font, String textColor, String textAngle, String textScale, String drawBox, String drawLeader, String lineWidth, String lineColor, String fillColor, String leaderXColumnName, String leaderYColumnName, String filter, double minX, double maxX, double minY, double maxY, int width, int height, String projection, Map<String,String> options)
VisualizeIsochroneResponse visualizeIsochrone(VisualizeIsochroneRequest request) - Generate an image containing isolines for travel results using an existing graph.
VisualizeIsochroneResponse visualizeIsochrone(String graphName, String sourceNode, double maxSolutionRadius, List<String> weightsOnEdges, List<String> restrictions, int numLevels, boolean generateImage, String levelsTable, Map<String,String> styleOptions, Map<String,String> solveOptions, Map<String,String> contourOptions, Map<String,String> options) - Generate an image containing isolines for travel results using an existing graph.
-
Methods inherited from class com.gpudb.GPUdbBase
addHttpHeader, addKnownType, addKnownType, addKnownTypeFromTable, addKnownTypeFromTable, addKnownTypeObjectMap, createAuthorizationHeader, createHASyncModeHeader, decode, decode, decode, decodeMultiple, decodeMultiple, encode, encode, execute, execute, execute, finalize, getApiVersion, getAuthorizationFromHttpHeaders, getBypassSslCertCheck, getClusterInfo, getExecutor, getFailoverURLs, getHARingInfo, getHARingSize, getHASyncMode, getHmURL, getHmURLs, getHostAddresses, getHttpHeaders, getNumClusterSwitches, getPassword, getPrimaryHostname, getPrimaryUrl, getRecordsJson, getRecordsJson, getRecordsJson, getRecordsJson, getRecordsJson, getRecordsJson, getServerVersion, getSystemProperties, getThreadCount, getTimeout, getTypeDescriptor, getTypeObjectMap, getURL, getURLs, getUsername, getUseSnappy, incrementNumClusterSwitches, initializeHttpConnection, initializeHttpConnection, initializeHttpPostRequest, initializeHttpPostRequest, insertRecordsFromJson, insertRecordsFromJson, insertRecordsFromJson, isAutoDiscoveryEnabled, isKineticaRunning, list, options, ping, ping, ping, query, query, query, removeHttpHeader, removeProtectedHttpHeaders, selectNextCluster, setHASyncMode, setHostManagerPort, setTypeDescriptorIfMissing, submitRequest, submitRequest, submitRequest, submitRequest, submitRequest, submitRequest, submitRequest, submitRequest, submitRequestRaw, submitRequestRaw, submitRequestRaw, submitRequestRaw, submitRequestRaw, submitRequestRaw, submitRequestToHM, submitRequestToHM, switchURL
-
-
-
-
Constructor Detail
-
GPUdb
public GPUdb(String url) throws GPUdbException
Creates a GPUdb instance for the GPUdb server at the specified URL using default options. Note that these options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
- Parameters:
url - The URL of the GPUdb server. Can be a comma-separated string containing multiple full URLs, or a single URL. For example 'http://172.42.40.1:9191,http://172.42.40.2:9191'. If a single URL is given, the given URL will be used as the primary URL.
- Throws:
GPUdbException - if an error occurs during creation.
-
GPUdb
public GPUdb(URL url) throws GPUdbException
Creates a GPUdb instance for the GPUdb server at the specified URL using default options. Note that these options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
- Parameters:
url - The URL of the GPUdb server. The given URL will be used as the primary URL.
- Throws:
GPUdbException - if an error occurs during creation.
-
GPUdb
public GPUdb(List<URL> urls) throws GPUdbException
Creates a GPUdb instance for the GPUdb server with the specified URLs using default options. At any given time, one URL (initially selected at random from the list) will be active and used for all GPUdb calls, but in the event of failure, the other URLs will be tried in order, and if a working one is found it will become the new active URL. Note that the default options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
- Parameters:
urls - The URLs of the GPUdb server. If a single URL is given, it will be used as the primary URL.
- Throws:
GPUdbException - if an error occurs during creation.
-
GPUdb
public GPUdb(String url, GPUdbBase.Options options) throws GPUdbException
Creates a GPUdb instance for the GPUdb server at the specified URL using the specified options. Note that these options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
- Parameters:
url - The URL of the GPUdb server. Can be a comma-separated string containing multiple full URLs, or a single URL. For example 'http://172.42.40.1:9191,http://172.42.40.2:9191'. If a single URL is given, and no primary URL is specified via the options, the given URL will be used as the primary URL.
options - The options, e.g., primary cluster URL, to use.
- Throws:
GPUdbException - if an error occurs during creation.
- See Also:
GPUdbBase.Options
-
GPUdb
public GPUdb(URL url, GPUdbBase.Options options) throws GPUdbException
Creates a GPUdb instance for the GPUdb server at the specified URL using the specified options. Note that these options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
- Parameters:
url - The URL of the GPUdb server. If no primary URL is specified via the options, the given URL will be used as the primary URL.
options - The options, e.g., primary cluster URL, to use.
- Throws:
GPUdbException - if an error occurs during creation.
- See Also:
GPUdbBase.Options
-
GPUdb
public GPUdb(List<URL> urls, GPUdbBase.Options options) throws GPUdbException
Creates a GPUdb instance for the GPUdb server with the specified URLs using the specified options. At any given time, one URL (initially selected at random from the list) will be active and used for all GPUdb calls, but in the event of failure, the other URLs will be tried in order, and if a working one is found it will become the new active URL. Note that the specified options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
- Parameters:
urls - The URLs of the GPUdb server. If a single URL is given, and no primary URL is specified via the options, the given URL will be used as the primary URL.
options - The options, e.g., primary cluster URL, to use.
- Throws:
GPUdbException - if an error occurs during creation.
- See Also:
GPUdbBase.Options
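A minimal connection sketch. The cluster URLs and credentials below are hypothetical, and the GPUdb construction itself is shown in comments since it requires the client library and a reachable Kinetica server; the runnable part only demonstrates how the comma-separated URL form is assembled.

```java
import java.util.Arrays;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        // Hypothetical failover ring; with the String constructor, the first
        // URL acts as primary unless Options overrides it.
        List<String> urls = Arrays.asList(
            "http://172.42.40.1:9191",
            "http://172.42.40.2:9191");

        // Comma-separated form accepted by GPUdb(String url)
        String urlParam = String.join(",", urls);

        // With com.gpudb on the classpath, the connection would look like:
        //   GPUdbBase.Options opts = new GPUdbBase.Options();  // configure before construction
        //   GPUdb gpudb = new GPUdb(urlParam, opts);
        // Options are fixed at construction time; to change them,
        // build a new GPUdb instance.
        System.out.println(urlParam);
    }
}
```

Passing a List<URL> instead of the joined string gives the same failover behavior, with the initial active URL chosen at random from the list.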
-
-
Method Detail
-
adminAddHost
public AdminAddHostResponse adminAddHost(AdminAddHostRequest request) throws GPUdbException
Adds a host to an existing cluster.
Note: This method should be used for on-premise deployments only.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminAddHost
public AdminAddHostResponse adminAddHost(String hostAddress, Map<String,String> options) throws GPUdbException
Adds a host to an existing cluster.
Note: This method should be used for on-premise deployments only.
- Parameters:
hostAddress - IP address of the host that will be added to the cluster. This host must have installed the same version of Kinetica as the cluster to which it is being added.
options - Optional parameters.
  DRY_RUN: If set to TRUE, only validation checks will be performed. No host is added. The default value is FALSE.
  ACCEPTS_FAILOVER: If set to TRUE, the host will accept processes (ranks, graph server, etc.) in the event of a failover on another node in the cluster. The default value is FALSE.
  PUBLIC_ADDRESS: The publicly-accessible IP address for the host being added, typically specified for clients using multi-head operations. This setting is required if any other host(s) in the cluster specify a public address.
  HOST_MANAGER_PUBLIC_URL: The publicly-accessible full path URL to the host manager on the host being added, e.g., 'http://172.123.45.67:9300'. The default host manager port can be found in the list of ports used by Kinetica.
  RAM_LIMIT: The desired RAM limit for the host being added, i.e., the sum of RAM usage for all processes on the host will not be able to exceed this value. Supported units: K (thousand), KB (kilobytes), M (million), MB (megabytes), G (billion), GB (gigabytes); if no unit is provided, the value is assumed to be in bytes. For example, if RAM_LIMIT is set to 10M, the resulting RAM limit is 10 million bytes. Set RAM_LIMIT to -1 to have no RAM limit.
  GPUS: Comma-delimited list of GPU indices (starting at 1) that are eligible for running worker processes. If left blank, all GPUs on the host being added will be eligible.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
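A sketch of building the options map for adminAddHost. The host and public addresses are hypothetical; the option keys below assume the server expects the lowercase string form of the constants documented above, and the actual call is left as a comment since it needs a live cluster.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        // Validate first: with dry_run set, only validation checks run and
        // no host is actually added.
        Map<String, String> options = new LinkedHashMap<>();
        options.put("dry_run", "true");
        options.put("accepts_failover", "true");
        options.put("ram_limit", "10M");                 // 10 million bytes
        options.put("public_address", "203.0.113.10");   // hypothetical public IP

        // With a live cluster this would be:
        //   AdminAddHostResponse resp = gpudb.adminAddHost("172.123.45.69", options);
        // Re-issue without dry_run to perform the actual add.
        System.out.println(options);
    }
}
```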
-
adminAddRanks
public AdminAddRanksResponse adminAddRanks(AdminAddRanksRequest request) throws GPUdbException
Add one or more ranks to an existing Kinetica cluster. The new ranks will not contain any data initially (other than replicated tables) and will not be assigned any shards. To rebalance data and shards across the cluster, use adminRebalance.
The database must be offline for this operation; see adminOffline.
For example, if attempting to add three new ranks (two ranks on host 172.123.45.67 and one rank on host 172.123.45.68) to a Kinetica cluster with additional configuration parameters:
* hosts would be an array including 172.123.45.67 in the first two indices (signifying two ranks being added to host 172.123.45.67) and 172.123.45.68 in the last index (signifying one rank being added to host 172.123.45.68)
* configParams would be an array of maps, with each map corresponding to the ranks being added in hosts. The key of each map would be the configuration parameter name and the value would be the parameter's value, e.g. '{"rank.gpu":"1"}'
This endpoint's processing includes copying all replicated table data to the new rank(s) and therefore could take a long time. The API call may time out if run directly. It is recommended to run this endpoint asynchronously via createJob.
Note: This method should be used for on-premise deployments only.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminAddRanks
public AdminAddRanksResponse adminAddRanks(List<String> hosts, List<Map<String,String>> configParams, Map<String,String> options) throws GPUdbException
Add one or more ranks to an existing Kinetica cluster. The new ranks will not contain any data initially (other than replicated tables) and will not be assigned any shards. To rebalance data and shards across the cluster, use adminRebalance.
The database must be offline for this operation; see adminOffline.
For example, if attempting to add three new ranks (two ranks on host 172.123.45.67 and one rank on host 172.123.45.68) to a Kinetica cluster with additional configuration parameters:
* hosts would be an array including 172.123.45.67 in the first two indices (signifying two ranks being added to host 172.123.45.67) and 172.123.45.68 in the last index (signifying one rank being added to host 172.123.45.68)
* configParams would be an array of maps, with each map corresponding to the ranks being added in hosts. The key of each map would be the configuration parameter name and the value would be the parameter's value, e.g. '{"rank.gpu":"1"}'
This endpoint's processing includes copying all replicated table data to the new rank(s) and therefore could take a long time. The API call may time out if run directly. It is recommended to run this endpoint asynchronously via createJob.
Note: This method should be used for on-premise deployments only.
- Parameters:
hosts - Array of host IP addresses (matching a hostN.address from the gpudb.conf file), or host identifiers (e.g. 'host0' from the gpudb.conf file), on which to add ranks to the cluster. The hosts must already be in the cluster. If needed beforehand, to add a new host to the cluster use adminAddHost. Include the same entry as many times as there are ranks to add to the cluster, e.g., if two ranks on host 172.123.45.67 should be added, hosts could look like '["172.123.45.67", "172.123.45.67"]'. All ranks will be added simultaneously, i.e. they're not added in the order of this array. Each entry in this array corresponds to the entry at the same index in configParams.
configParams - Array of maps containing configuration parameters to apply to the new ranks found in hosts. For example, '{"rank.gpu":"2", "tier.ram.rank.limit":"10000000000"}'. Currently, the available parameters are rank-specific parameters in the Network, Hardware, Text Search, and RAM Tiered Storage sections in the gpudb.conf file, with the key exception of the 'rankN.host' settings in the Network section, which will be determined by hosts instead. Though many of these configuration parameters typically are affixed with 'rankN' in the gpudb.conf file (where N is the rank number), the 'N' should be omitted in configParams as the new rank number(s) are not allocated until the ranks have been added to the cluster. Each entry in this array corresponds to the entry at the same index in hosts. This array must either be completely empty or have the same number of elements as hosts. An empty configParams array will result in the new ranks being set with default parameters.
options - Optional parameters.
  DRY_RUN: If TRUE, only validation checks will be performed. No ranks are added. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
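The parallel-array contract above can be sketched as follows. The host IPs and rank.gpu values reuse the documentation's example; the call itself is commented out since it requires a live, offline cluster.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        // Two ranks on .67, one on .68: repeat a host once per rank to add.
        List<String> hosts = Arrays.asList(
            "172.123.45.67", "172.123.45.67", "172.123.45.68");

        // configParams is positional: entry i configures the rank added for
        // hosts[i]. Note the 'rankN.' prefix is omitted from the keys.
        List<Map<String, String>> configParams = new ArrayList<>();
        configParams.add(Collections.singletonMap("rank.gpu", "1"));
        configParams.add(Collections.singletonMap("rank.gpu", "2"));
        configParams.add(Collections.singletonMap("rank.gpu", "1"));

        // Must be empty or the same length as hosts.
        if (!configParams.isEmpty() && configParams.size() != hosts.size()) {
            throw new IllegalStateException("hosts/configParams length mismatch");
        }

        // With a live, offline cluster (ideally run asynchronously via createJob):
        //   gpudb.adminAddRanks(hosts, configParams,
        //       Collections.singletonMap("dry_run", "true"));
        System.out.println(hosts.size() + " ranks requested");
    }
}
```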
-
adminAlterHost
public AdminAlterHostResponse adminAlterHost(AdminAlterHostRequest request) throws GPUdbException
Alter properties on an existing host in the cluster. Currently, the only property that can be altered is a host's ability to accept failover processes.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminAlterHost
public AdminAlterHostResponse adminAlterHost(String host, Map<String,String> options) throws GPUdbException
Alter properties on an existing host in the cluster. Currently, the only property that can be altered is a host's ability to accept failover processes.
- Parameters:
host - Identifies the host this applies to. Can be the host address, or formatted as 'hostN' where N is the host number as specified in gpudb.conf.
options - Optional parameters.
  ACCEPTS_FAILOVER: If set to TRUE, the host will accept processes (ranks, graph server, etc.) in the event of a failover on another node in the cluster. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminAlterJobs
public AdminAlterJobsResponse adminAlterJobs(AdminAlterJobsRequest request) throws GPUdbException
Perform the requested action on a list of one or more job(s). Based on the type of job and the current state of execution, the action may not be successfully executed. The final result of the attempted actions for each specified job is returned in the status array of the response. See Job Manager for more information.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminAlterJobs
public AdminAlterJobsResponse adminAlterJobs(List<Long> jobIds, String action, Map<String,String> options) throws GPUdbException
Perform the requested action on a list of one or more job(s). Based on the type of job and the current state of execution, the action may not be successfully executed. The final result of the attempted actions for each specified job is returned in the status array of the response. See Job Manager for more information.
- Parameters:
jobIds - Jobs to be modified.
action - Action to be performed on the jobs specified by job_ids.
options - Optional parameters.
  JOB_TAG: Job tag returned in call to create the job.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
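A sketch of the per-job semantics: one action is applied to every listed job, and the outcome for each comes back positionally in the response's status array. The job IDs, the "cancel" action string, and the tag are hypothetical (the extraction above lost the supported-values list), and the call is commented out since it needs a live cluster.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        // Two hypothetical job IDs to act on.
        List<Long> jobIds = Arrays.asList(100045L, 100046L);

        // Optional job tag matching the tag supplied when the job was created.
        Map<String, String> options =
            Collections.singletonMap("job_tag", "nightly-load");

        // With a live cluster:
        //   AdminAlterJobsResponse resp = gpudb.adminAlterJobs(jobIds, "cancel", options);
        //   resp would then hold one status entry per job ID, since an action
        //   can succeed for one job and fail for another depending on state.
        System.out.println(jobIds + " " + options);
    }
}
```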
-
adminBackupBegin
public AdminBackupBeginResponse adminBackupBegin(AdminBackupBeginRequest request) throws GPUdbException
Prepares the system for a backup by closing all open file handles after allowing current active jobs to complete. When the database is in backup mode, queries that result in a disk write operation will be blocked until backup mode has been completed by using adminBackupEnd.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminBackupBegin
public AdminBackupBeginResponse adminBackupBegin(Map<String,String> options) throws GPUdbException
Prepares the system for a backup by closing all open file handles after allowing current active jobs to complete. When the database is in backup mode, queries that result in a disk write operation will be blocked until backup mode has been completed by using adminBackupEnd.
- Parameters:
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminBackupEnd
public AdminBackupEndResponse adminBackupEnd(AdminBackupEndRequest request) throws GPUdbException
Restores the system to normal operating mode after a backup has completed, allowing any queries that were blocked to complete.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminBackupEnd
public AdminBackupEndResponse adminBackupEnd(Map<String,String> options) throws GPUdbException
Restores the system to normal operating mode after a backup has completed, allowing any queries that were blocked to complete.
- Parameters:
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
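Because disk-writing queries stay blocked until adminBackupEnd is called, the begin/end pair is naturally expressed as a try/finally. This is a structural sketch only: copyPersistDirectory is a hypothetical external backup step, and the GPUdb calls are comments since they need a live cluster.

```java
public class Main {
    public static void main(String[] args) {
        // Backup window sketch. adminBackupEnd belongs in a finally block so
        // the database returns to normal operation even if the copy fails:
        //
        //   gpudb.adminBackupBegin(new java.util.HashMap<>());
        //   try {
        //       copyPersistDirectory();  // hypothetical external backup step
        //   } finally {
        //       gpudb.adminBackupEnd(new java.util.HashMap<>());
        //   }
        System.out.println("backup window: begin -> copy -> end");
    }
}
```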
-
adminHaOffline
public AdminHaOfflineResponse adminHaOffline(AdminHaOfflineRequest request) throws GPUdbException
Pauses consumption of messages from other HA clusters to support data repair/recovery scenarios. In-flight queries may fail to replicate to other clusters in the ring when going offline.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminHaOffline
public AdminHaOfflineResponse adminHaOffline(boolean offline, Map<String,String> options) throws GPUdbException
Pauses consumption of messages from other HA clusters to support data repair/recovery scenarios. In-flight queries may fail to replicate to other clusters in the ring when going offline.
- Parameters:
offline - Set to true if the desired state is offline. Supported values: true, false.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminHaRefresh
public AdminHaRefreshResponse adminHaRefresh(AdminHaRefreshRequest request) throws GPUdbException
Restarts the HA processing on the given cluster as a mechanism of accepting breaking HA configuration changes. Additionally, the cluster is put into read-only mode while HA is restarting.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminHaRefresh
public AdminHaRefreshResponse adminHaRefresh(Map<String,String> options) throws GPUdbException
Restarts the HA processing on the given cluster as a mechanism of accepting breaking HA configuration changes. Additionally, the cluster is put into read-only mode while HA is restarting.
- Parameters:
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminOffline
public AdminOfflineResponse adminOffline(AdminOfflineRequest request) throws GPUdbException
Take the system offline. When the system is offline, no user operations can be performed, with the exception of a system shutdown.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminOffline
public AdminOfflineResponse adminOffline(boolean offline, Map<String,String> options) throws GPUdbException
Take the system offline. When the system is offline, no user operations can be performed, with the exception of a system shutdown.
- Parameters:
offline - Set to true if the desired state is offline. Supported values: true, false.
options - Optional parameters.
  FLUSH_TO_DISK: Flush to disk when going offline.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminRebalance
public AdminRebalanceResponse adminRebalance(AdminRebalanceRequest request) throws GPUdbException
Rebalance the data in the cluster so that all nodes contain an approximately equal number of records, and/or rebalance the shards to be equally distributed (as much as possible) across all the ranks.
The database must be offline for this operation; see adminOffline.
* If adminRebalance is invoked after a change is made to the cluster, e.g., a host was added or removed, sharded data will be evenly redistributed across the cluster by number of shards per rank, while unsharded data will be redistributed across the cluster by data size per rank.
* If adminRebalance is invoked at some point when unsharded data (a.k.a. randomly-sharded data) in the cluster is unevenly distributed over time, sharded data will not move, while unsharded data will be redistributed across the cluster by data size per rank.
NOTE: Replicated data will not move as a result of this call.
This endpoint's processing time depends on the amount of data in the system, thus the API call may time out if run directly. It is recommended to run this endpoint asynchronously via createJob.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminRebalance
public AdminRebalanceResponse adminRebalance(Map<String,String> options) throws GPUdbException
Rebalance the data in the cluster so that all nodes contain an equal number of records approximately and/or rebalance the shards to be equally distributed (as much as possible) across all the ranks.The database must be offline for this operation, see
adminOffline* If
adminRebalanceis invoked after a change is made to the cluster, e.g., a host was added or removed, sharded data will be evenly redistributed across the cluster by number of shards per rank while unsharded data will be redistributed across the cluster by data size per rank* If
adminRebalanceis invoked at some point when unsharded data (a.k.a. randomly-sharded) in the cluster is unevenly distributed over time, sharded data will not move while unsharded data will be redistributed across the cluster by data size per rankNOTE: Replicated data will not move as a result of this call
This endpoint's processing time depends on the amount of data in the system, so the API call may time out if run directly. It is recommended to run this endpoint asynchronously via createJob.
- Parameters:
options - Optional parameters.
REBALANCE_SHARDED_DATA: If TRUE, sharded data will be rebalanced approximately equally across the cluster. Note that for clusters with large amounts of sharded data, this data transfer could be time-consuming and result in delayed query responses. Supported values: The default value is TRUE.
REBALANCE_UNSHARDED_DATA: If TRUE, unsharded data (a.k.a. randomly-sharded) will be rebalanced approximately equally across the cluster. Note that for clusters with large amounts of unsharded data, this data transfer could be time-consuming and result in delayed query responses. Supported values: The default value is TRUE.
TABLE_INCLUDES: Comma-separated list of unsharded table names to rebalance. Not applicable to sharded tables because they are always rebalanced. Cannot be used simultaneously with TABLE_EXCLUDES. This parameter is ignored if REBALANCE_UNSHARDED_DATA is FALSE.
TABLE_EXCLUDES: Comma-separated list of unsharded table names to not rebalance. Not applicable to sharded tables because they are always rebalanced. Cannot be used simultaneously with TABLE_INCLUDES. This parameter is ignored if REBALANCE_UNSHARDED_DATA is FALSE.
AGGRESSIVENESS: Influences how much data is moved at a time during rebalance. A higher AGGRESSIVENESS will complete the rebalance faster. A lower AGGRESSIVENESS will take longer but allow for better interleaving between the rebalance and other queries. Valid values are constants from 1 (lowest) to 10 (highest). The default value is '10'.
COMPACT_AFTER_REBALANCE: Perform compaction of deleted records once the rebalance completes to reclaim memory and disk space. Default is TRUE, unless REPAIR_INCORRECTLY_SHARDED_DATA is set to TRUE. Supported values: The default value is TRUE.
COMPACT_ONLY: If set to TRUE, ignore rebalance options and attempt to perform compaction of deleted records to reclaim memory and disk space without rebalancing first. Supported values: The default value is FALSE.
REPAIR_INCORRECTLY_SHARDED_DATA: Scans for any data sharded incorrectly and re-routes the data to the correct location. Only necessary if adminVerifyDb reports an error in sharding alignment. This can be done as part of a typical rebalance after expanding the cluster or in a standalone fashion when it is believed that data is sharded incorrectly somewhere in the cluster. Compaction will not be performed by default when this is enabled. If this option is set to TRUE, the time necessary to rebalance and the memory used by the rebalance may increase. Supported values: The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
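The rebalance options above are passed as a plain string-to-string Map. The following is an illustrative sketch only: the table names are hypothetical, and the lowercase string keys are assumed to match the corresponding Options constants on the request class; the client call itself is shown as a comment.

```java
import java.util.HashMap;
import java.util.Map;

public class RebalanceOptionsSketch {
    // Builds an options map limiting the rebalance to two (hypothetical)
    // unsharded tables at a lower aggressiveness, so other queries
    // interleave better with the rebalance.
    public static Map<String, String> build() {
        Map<String, String> options = new HashMap<>();
        options.put("rebalance_sharded_data", "true");
        options.put("rebalance_unsharded_data", "true");
        // table_includes only applies when rebalance_unsharded_data is true
        options.put("table_includes", "my_schema.events,my_schema.logs");
        options.put("aggressiveness", "3"); // 1 (lowest) .. 10 (highest)
        return options;
    }

    public static void main(String[] args) {
        System.out.println(build());
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // gpudb.adminRebalance(build()); // long-running; prefer createJob
    }
}
```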
-
adminRemoveHost
public AdminRemoveHostResponse adminRemoveHost(AdminRemoveHostRequest request) throws GPUdbException
Removes a host from an existing cluster. If the host to be removed has any ranks running on it, the ranks must be removed using adminRemoveRanks or manually switched over to a new host using adminSwitchover prior to host removal. If the host to be removed has the graph server or SQL planner running on it, these must be manually switched over to a new host using adminSwitchover.
Note: This method should be used for on-premise deployments only.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminRemoveHost
public AdminRemoveHostResponse adminRemoveHost(String host, Map<String,String> options) throws GPUdbException
Removes a host from an existing cluster. If the host to be removed has any ranks running on it, the ranks must be removed using adminRemoveRanks or manually switched over to a new host using adminSwitchover prior to host removal. If the host to be removed has the graph server or SQL planner running on it, these must be manually switched over to a new host using adminSwitchover.
Note: This method should be used for on-premise deployments only.
- Parameters:
host - Identifies the host this applies to. Can be the host address, or formatted as 'hostN' where N is the host number as specified in gpudb.conf.
options - Optional parameters.
DRY_RUN: If set to TRUE, only validation checks will be performed. No host is removed. Supported values: The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminRemoveRanks
public AdminRemoveRanksResponse adminRemoveRanks(AdminRemoveRanksRequest request) throws GPUdbException
Remove one or more ranks from an existing Kinetica cluster. All data will be rebalanced to other ranks before the rank(s) are removed unless the REBALANCE_SHARDED_DATA or REBALANCE_UNSHARDED_DATA parameters are set to FALSE in the options, in which case the corresponding sharded data and/or unsharded data (a.k.a. randomly-sharded) will be deleted.
The database must be offline for this operation; see adminOffline.
This endpoint's processing time depends on the amount of data in the system, so the API call may time out if run directly. It is recommended to run this endpoint asynchronously via createJob.
Note: This method should be used for on-premise deployments only.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminRemoveRanks
public AdminRemoveRanksResponse adminRemoveRanks(List<String> ranks, Map<String,String> options) throws GPUdbException
Remove one or more ranks from an existing Kinetica cluster. All data will be rebalanced to other ranks before the rank(s) are removed unless the REBALANCE_SHARDED_DATA or REBALANCE_UNSHARDED_DATA parameters are set to FALSE in the options, in which case the corresponding sharded data and/or unsharded data (a.k.a. randomly-sharded) will be deleted.
The database must be offline for this operation; see adminOffline.
This endpoint's processing time depends on the amount of data in the system, so the API call may time out if run directly. It is recommended to run this endpoint asynchronously via createJob.
Note: This method should be used for on-premise deployments only.
- Parameters:
ranks - Each array value designates one or more ranks to remove from the cluster. Values can be formatted as 'rankN' for a specific rank, 'hostN' (from the gpudb.conf file) to remove all ranks on that host, or the host IP address (hostN.address from the gpudb.conf file), which also removes all ranks on that host. Rank 0 (the head rank) cannot be removed (but can be moved to another host using adminSwitchover). At least one worker rank must be left in the cluster after the operation.
options - Optional parameters.
REBALANCE_SHARDED_DATA: If TRUE, sharded data will be rebalanced approximately equally across the cluster. Note that for clusters with large amounts of sharded data, this data transfer could be time-consuming and result in delayed query responses. Supported values: The default value is TRUE.
REBALANCE_UNSHARDED_DATA: If TRUE, unsharded data (a.k.a. randomly-sharded) will be rebalanced approximately equally across the cluster. Note that for clusters with large amounts of unsharded data, this data transfer could be time-consuming and result in delayed query responses. Supported values: The default value is TRUE.
AGGRESSIVENESS: Influences how much data is moved at a time during rebalance. A higher AGGRESSIVENESS will complete the rebalance faster. A lower AGGRESSIVENESS will take longer but allow for better interleaving between the rebalance and other queries. Valid values are constants from 1 (lowest) to 10 (highest). The default value is '10'.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
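A call to this overload takes a list of rank/host designators plus the options Map. The sketch below is illustrative: the rank and host names are hypothetical, the lowercase option keys are assumed to match the Options constants on the request class, and the client call is shown as a comment.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RemoveRanksSketch {
    // Options that rebalance sharded data to the surviving ranks but let
    // the randomly-sharded data be deleted instead of rebalanced.
    public static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("rebalance_sharded_data", "true");
        options.put("rebalance_unsharded_data", "false"); // unsharded data deleted
        options.put("aggressiveness", "5");
        return options;
    }

    public static void main(String[] args) {
        // Remove every rank on host2 plus one specific rank; rank0 (the
        // head rank) can never appear in this list.
        List<String> ranks = Arrays.asList("host2", "rank3");
        System.out.println(ranks + " " + buildOptions());
        // gpudb.adminRemoveRanks(ranks, buildOptions()); // offline DB required;
        //                                                // prefer createJob
    }
}
```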
-
adminRepairTable
public AdminRepairTableResponse adminRepairTable(AdminRepairTableRequest request) throws GPUdbException
Manually repair a corrupted table. Returns information about affected tables.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminRepairTable
public AdminRepairTableResponse adminRepairTable(List<String> tableNames, Map<String,String> tableTypes, Map<String,String> options) throws GPUdbException
Manually repair a corrupted table. Returns information about affected tables.
- Parameters:
tableNames - List of tables to query. An asterisk returns all tables.
tableTypes - Internal: type_id per table.
options - Optional parameters.
REPAIR_POLICY: Corrective action to take. Supported values:
DELETE_CHUNKS: Deletes any corrupted chunks
SHRINK_COLUMNS: Shrinks corrupted chunks to the shortest column
REPLAY_WAL: Manually invokes write-ahead log (WAL) replay on the table
ALTER_TABLE: Resets column modifications after an incomplete alter column.
VERIFY_ALL: If FALSE, only table chunk data already known to be corrupted will be repaired. Otherwise, the database will perform a full table scan to check for correctness. Supported values: The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminSendAlert
public AdminSendAlertResponse adminSendAlert(AdminSendAlertRequest request) throws GPUdbException
Sends a user-generated alert to the monitoring system.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminSendAlert
public AdminSendAlertResponse adminSendAlert(String message, String label, String logLevel, Map<String,String> options) throws GPUdbException
Sends a user-generated alert to the monitoring system.
- Parameters:
message - Alert message body. The default value is ''.
label - Label to add to the alert message. The default value is ''.
logLevel - Alert message logging criteria. Supported values:
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminShowAlerts
public AdminShowAlertsResponse adminShowAlerts(AdminShowAlertsRequest request) throws GPUdbException
Requests a list of the most recent alerts. Returns lists of alert data, including timestamp and type.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminShowAlerts
public AdminShowAlertsResponse adminShowAlerts(int numAlerts, Map<String,String> options) throws GPUdbException
Requests a list of the most recent alerts. Returns lists of alert data, including timestamp and type.
- Parameters:
numAlerts - Number of most recent alerts to request. The response will include up to numAlerts, depending on how many alerts there are in the system. A value of 0 returns all stored alerts.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminShowClusterOperations
public AdminShowClusterOperationsResponse adminShowClusterOperations(AdminShowClusterOperationsRequest request) throws GPUdbException
Requests the detailed status of the current operation (by default) or a prior cluster operation specified by historyIndex. Returns details on the requested cluster operation.
The response will also indicate how many cluster operations are stored in the history.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminShowClusterOperations
public AdminShowClusterOperationsResponse adminShowClusterOperations(int historyIndex, Map<String,String> options) throws GPUdbException
Requests the detailed status of the current operation (by default) or a prior cluster operation specified by historyIndex. Returns details on the requested cluster operation.
The response will also indicate how many cluster operations are stored in the history.
- Parameters:
historyIndex - Indicates which cluster operation to retrieve. Use 0 for the most recent. The default value is 0.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminShowJobs
public AdminShowJobsResponse adminShowJobs(AdminShowJobsRequest request) throws GPUdbException
Get a list of the current jobs in GPUdb.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminShowJobs
public AdminShowJobsResponse adminShowJobs(Map<String,String> options) throws GPUdbException
Get a list of the current jobs in GPUdb.
- Parameters:
options - Optional parameters.
SHOW_ASYNC_JOBS: If TRUE, then completed async jobs are also included in the response. By default, once async jobs are completed they are no longer included in the jobs list. Supported values: The default value is FALSE.
SHOW_WORKER_INFO: If TRUE, then information is also returned from worker ranks. By default, only status from the head rank is returned. Supported values:
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
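Both options can be combined in one request, e.g. to see finished async jobs alongside worker-rank status. This is a sketch only: the lowercase keys are assumed to match the Options constants on the request class, and the commented client call is illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class ShowJobsSketch {
    // Requests the job list including completed async jobs and status from
    // worker ranks, not just the head rank.
    public static Map<String, String> build() {
        Map<String, String> options = new HashMap<>();
        options.put("show_async_jobs", "true");  // include completed async jobs
        options.put("show_worker_info", "true"); // include worker-rank status
        return options;
    }

    public static void main(String[] args) {
        System.out.println(build());
        // AdminShowJobsResponse resp = gpudb.adminShowJobs(build());
    }
}
```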
-
adminShowShards
public AdminShowShardsResponse adminShowShards(AdminShowShardsRequest request) throws GPUdbException
Show the mapping of shards to the corresponding rank and TOM. The response message contains a list of 16384 (the total number of shards in the system) rank and TOM numbers corresponding to each shard.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminShowShards
public AdminShowShardsResponse adminShowShards(Map<String,String> options) throws GPUdbException
Show the mapping of shards to the corresponding rank and TOM. The response message contains a list of 16384 (the total number of shards in the system) rank and TOM numbers corresponding to each shard.
- Parameters:
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminShutdown
public AdminShutdownResponse adminShutdown(AdminShutdownRequest request) throws GPUdbException
Exits the database server application.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminShutdown
public AdminShutdownResponse adminShutdown(String exitType, String authorization, Map<String,String> options) throws GPUdbException
Exits the database server application.
- Parameters:
exitType - Reserved for future use. User can pass an empty string.
authorization - No longer used. User can pass an empty string.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminSwitchover
public AdminSwitchoverResponse adminSwitchover(AdminSwitchoverRequest request) throws GPUdbException
Manually switch over one or more processes to another host. Individual ranks or entire hosts may be moved to another host.
Note: This method should be used for on-premise deployments only.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminSwitchover
public AdminSwitchoverResponse adminSwitchover(List<String> processes, List<String> destinations, Map<String,String> options) throws GPUdbException
Manually switch over one or more processes to another host. Individual ranks or entire hosts may be moved to another host.
Note: This method should be used for on-premise deployments only.
- Parameters:
processes - Indicates the process identifier to switch over to another host. Options are 'hostN' and 'rankN', where 'N' corresponds to the number associated with a host or rank in the Network section of the gpudb.conf file; e.g., 'host[N].address' or 'rank[N].host'. If 'hostN' is provided, all processes on that host will be moved to another host. Each entry in this array will be switched over to the corresponding host entry at the same index in destinations.
destinations - Indicates to which host to switch over each corresponding process given in processes. Each index must be specified as 'hostN', where 'N' corresponds to the number associated with a host in the Network section of the gpudb.conf file; e.g., 'host[N].address'. Each entry in this array will receive the corresponding process entry at the same index in processes.
options - Optional parameters.
DRY_RUN: If set to TRUE, only validation checks will be performed. Nothing is switched over. Supported values: The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
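The index pairing between processes and destinations is easy to get wrong, so it can help to build the two lists side by side and validate their alignment before calling. This sketch uses hypothetical rank/host names; the pair() helper is purely illustrative (the endpoint takes the two parallel lists, not a map), and the client call is commented out.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SwitchoverSketch {
    // Pairs each process with its destination by index, mirroring how the
    // endpoint matches the two lists; throws if the lists do not align.
    public static Map<String, String> pair(List<String> processes,
                                           List<String> destinations) {
        if (processes.size() != destinations.size())
            throw new IllegalArgumentException(
                "processes and destinations must align by index");
        Map<String, String> moves = new LinkedHashMap<>();
        for (int i = 0; i < processes.size(); i++)
            moves.put(processes.get(i), destinations.get(i));
        return moves;
    }

    public static void main(String[] args) {
        // rank2 -> host3, and every process on host1 -> host4
        List<String> processes = Arrays.asList("rank2", "host1");
        List<String> destinations = Arrays.asList("host3", "host4");
        // Validate only; nothing is actually switched over.
        Map<String, String> options = Collections.singletonMap("dry_run", "true");
        System.out.println(pair(processes, destinations) + " " + options);
        // gpudb.adminSwitchover(processes, destinations, options);
    }
}
```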
-
adminVerifyDb
public AdminVerifyDbResponse adminVerifyDb(AdminVerifyDbRequest request) throws GPUdbException
Verify that the database is in a consistent state. When inconsistencies or errors are found, the verified_ok flag in the response is set to false and the list of errors found is provided in the error_list.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
adminVerifyDb
public AdminVerifyDbResponse adminVerifyDb(Map<String,String> options) throws GPUdbException
Verify that the database is in a consistent state. When inconsistencies or errors are found, the verified_ok flag in the response is set to false and the list of errors found is provided in the error_list.
- Parameters:
options - Optional parameters.
REBUILD_ON_ERROR: [DEPRECATED -- Use the Rebuild DB feature of GAdmin instead.] Supported values: The default value is FALSE.
VERIFY_NULLS: When TRUE, verifies that null values are set to zero. Supported values: The default value is FALSE.
VERIFY_PERSIST: When TRUE, persistent objects will be compared against their state in memory, and workers will be checked for orphaned table data in persist. To check for orphaned worker data, either set CONCURRENT_SAFE in options to TRUE or place the database offline. Supported values: The default value is FALSE.
CONCURRENT_SAFE: When TRUE, allows this endpoint to be run safely with other concurrent database operations. Other operations may be slower while this is running. Supported values: The default value is TRUE.
VERIFY_RANK0: If TRUE, compare rank0 table metadata against workers' metadata. Supported values: The default value is FALSE.
DELETE_ORPHANED_TABLES: If TRUE, orphaned table directories found on workers for which there is no corresponding metadata will be deleted. It is recommended to run this while the database is offline OR set CONCURRENT_SAFE in options to TRUE. Supported values: The default value is FALSE.
VERIFY_ORPHANED_TABLES_ONLY: If TRUE, only the presence of orphaned table directories will be checked; all persistence and table consistency checks will be skipped. Supported values: The default value is FALSE.
TABLE_INCLUDES: Comma-separated list of table names to include when verifying table consistency on workers. Cannot be used simultaneously with TABLE_EXCLUDES.
TABLE_EXCLUDES: Comma-separated list of table names to exclude when verifying table consistency on workers. Cannot be used simultaneously with TABLE_INCLUDES.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
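As the option list notes, the persistence and orphaned-data checks are only valid with CONCURRENT_SAFE set to TRUE or the database offline. The sketch below assembles such an options Map; the table name is hypothetical, the lowercase keys are assumed to match the Options constants on the request class, and the commented call/getters illustrate the verified_ok / error_list fields described above.

```java
import java.util.HashMap;
import java.util.Map;

public class VerifyDbSketch {
    // A verification pass scoped to one (hypothetical) table that is safe
    // to run alongside other queries: concurrent_safe=true makes the
    // persist/orphaned-data checks valid without taking the DB offline.
    public static Map<String, String> build() {
        Map<String, String> options = new HashMap<>();
        options.put("concurrent_safe", "true");
        options.put("verify_persist", "true");
        options.put("verify_rank0", "true");
        options.put("table_includes", "my_schema.orders");
        return options;
    }

    public static void main(String[] args) {
        System.out.println(build());
        // AdminVerifyDbResponse resp = gpudb.adminVerifyDb(build());
        // if verified_ok is false, inspect error_list in the response
    }
}
```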
-
aggregateConvexHull
public AggregateConvexHullResponse aggregateConvexHull(AggregateConvexHullRequest request) throws GPUdbException
Calculates and returns the convex hull for the values in a table specified by tableName.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateConvexHull
public AggregateConvexHullResponse aggregateConvexHull(String tableName, String xColumnName, String yColumnName, Map<String,String> options) throws GPUdbException
Calculates and returns the convex hull for the values in a table specified by tableName.
- Parameters:
tableName - Name of the table on which the operation will be performed. Must be an existing table, in [schema_name.]table_name format, using standard name resolution rules.
xColumnName - Name of the column containing the x coordinates of the points for the operation being performed.
yColumnName - Name of the column containing the y coordinates of the points for the operation being performed.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateGroupByRaw
public RawAggregateGroupByResponse aggregateGroupByRaw(AggregateGroupByRequest request) throws GPUdbException
Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination. This is somewhat analogous to an SQL-style SELECT ... GROUP BY.
For aggregation details and examples, see Aggregation. For limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except unrestricted-length strings may be used for computing applicable aggregates; columns marked as store-only cannot be used in grouping or aggregation.
The results can be paged via the offset and limit parameters. For example, to get 10 groups with the largest counts the inputs would be: limit=10, options={"sort_order":"descending", "sort_by":"value"}.
options can be used to customize the behavior of this call, e.g., filtering or sorting the results.
To group by columns 'x' and 'y' and compute the number of objects within each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use: column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg, mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min, arg_max, and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets.
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to HAVING.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a RESULT_TABLE name is specified in the options, the results are stored in a new table with that name; no results are returned in the response. Both the table name and resulting column names must adhere to standard naming conventions; column/aggregation expressions will need to be aliased. If the source table's shard key is used as the grouping column(s) and all result records are selected (offset is 0 and limit is -9999), the result table will be sharded; in all other cases it will be replicated. Sorting will function properly only if the result table is replicated or if there is only one processing node, and should not be relied upon in other cases. Not available when any of the values of columnNames is an unrestricted-length string.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateGroupBy
public AggregateGroupByResponse aggregateGroupBy(AggregateGroupByRequest request) throws GPUdbException
Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination. This is somewhat analogous to an SQL-style SELECT ... GROUP BY.
For aggregation details and examples, see Aggregation. For limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except unrestricted-length strings may be used for computing applicable aggregates; columns marked as store-only cannot be used in grouping or aggregation.
The results can be paged via the offset and limit parameters. For example, to get 10 groups with the largest counts the inputs would be: limit=10, options={"sort_order":"descending", "sort_by":"value"}.
options can be used to customize the behavior of this call, e.g., filtering or sorting the results.
To group by columns 'x' and 'y' and compute the number of objects within each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use: column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg, mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min, arg_max, and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets.
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to HAVING.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a RESULT_TABLE name is specified in the options, the results are stored in a new table with that name; no results are returned in the response. Both the table name and resulting column names must adhere to standard naming conventions; column/aggregation expressions will need to be aliased. If the source table's shard key is used as the grouping column(s) and all result records are selected (offset is 0 and limit is -9999), the result table will be sharded; in all other cases it will be replicated. Sorting will function properly only if the result table is replicated or if there is only one processing node, and should not be relied upon in other cases. Not available when any of the values of columnNames is an unrestricted-length string.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateGroupBy
public AggregateGroupByResponse aggregateGroupBy(String tableName, List<String> columnNames, long offset, long limit, Map<String,String> options) throws GPUdbException
Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination. This is somewhat analogous to an SQL-style SELECT...GROUP BY.For aggregation details and examples, see Aggregation. For limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except unrestricted-length strings may be used for computing applicable aggregates; columns marked as store-only are unable to be used in grouping or aggregation.
The results can be paged via the
offsetandlimitparameters. For example, to get 10 groups with the largest counts the inputs would be: limit=10, options={"sort_order":"descending", "sort_by":"value"}.optionscan be used to customize behavior of this call e.g. filtering or sorting the results.To group by columns 'x' and 'y' and compute the number of objects within each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use: column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg, mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min, arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to
HAVING.The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a
RESULT_TABLEname is specified in theoptions, the results are stored in a new table with that name--no results are returned in the response. Both the table name and resulting column names must adhere to standard naming conventions; column/aggregation expressions will need to be aliased. If the source table's shard key is used as the grouping column(s) and all result records are selected (offsetis 0 andlimitis -9999), the result table will be sharded, in all other cases it will be replicated. Sorting will properly function only if the result table is replicated or if there is only one processing node and should not be relied upon in other cases. Not available when any of the values ofcolumnNamesis an unrestricted-length string.- Parameters:
tableName- Name of an existing table or view on which the operation will be performed, in [schema_name.]table_name format, using standard name resolution rules.columnNames- List of one or more column names, expressions, and aggregate expressions.offset- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.limit- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. UsehasMoreRecordsto see if more records exist in the result to be fetched, andoffsetandlimitto request subsequent pages of results. The default value is -9999.options- Optional parameters.CREATE_TEMP_TABLE: IfTRUE, a unique temporary table name will be generated in the sys_temp schema and used in place ofRESULT_TABLE. IfRESULT_TABLE_PERSISTisFALSE(or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned inQUALIFIED_RESULT_TABLE_NAME. Supported values: The default value isFALSE.COLLECTION_NAME: [DEPRECATED--please specify the containing schema as part ofRESULT_TABLEand usecreateSchemato create the schema if non-existent] Name of a schema which is to contain the table specified inRESULT_TABLE. If the schema provided is non-existent, it will be automatically created.EXPRESSION: Filter expression to apply to the table prior to computing the aggregate group by.PIPELINED_EXPRESSION_EVALUATION: evaluate the group-by during last JoinedSet filter plan step. 
Supported values: The default value isFALSE.HAVING: Filter expression to apply to the aggregated results.SORT_ORDER: [DEPRECATED--use order_by instead] String indicating how the returned values should be sorted - ascending or descending. Supported values:ASCENDING: Indicates that the returned values should be sorted in ascending order.DESCENDING: Indicates that the returned values should be sorted in descending order.
ASCENDING.
SORT_BY: [DEPRECATED--use ORDER_BY instead] String determining how the results are sorted. Supported values:
KEY: Sort the returned values by key, which corresponds to the grouping columns. With multiple grouping columns, results are sorted by the first grouping column, then the second, and so on.
VALUE: Sort the returned values by value, which corresponds to the aggregates. With multiple aggregates, results are sorted by the first aggregate, then the second, and so on.
The default value is VALUE.
ORDER_BY: Comma-separated list of the columns to be sorted by as well as the sort direction, e.g., 'timestamp asc, x desc'. The default value is ''.
STRATEGY_DEFINITION: The tier strategy for the table and its columns.
COMPRESSION_CODEC: The default compression codec for the result table's columns.
RESULT_TABLE: The name of a table used to store the results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. Column names (group-by and aggregate fields) need to be given aliases, e.g. ["FChar256 as fchar256", "sum(FDouble) as sfd"]. If present, no results are returned in the response. This option is not available if one of the grouping attributes is an unrestricted string (i.e., not charN) type.
RESULT_TABLE_PERSIST: If TRUE, the result table specified in RESULT_TABLE will be persisted and will not expire unless a TTL is specified. If FALSE, the result table will be an in-memory table and will expire unless a TTL is specified otherwise. The default value is FALSE.
RESULT_TABLE_FORCE_REPLICATED: Force the result table to be replicated (ignores any sharding). Must be used in combination with the RESULT_TABLE option. The default value is FALSE.
RESULT_TABLE_GENERATE_PK: If TRUE, set a primary key for the result table. Must be used in combination with the RESULT_TABLE option. The default value is FALSE.
RESULT_TABLE_GENERATE_SOFT_PK: If TRUE, set a soft primary key for the result table. Must be used in combination with the RESULT_TABLE option. The default value is FALSE.
TTL: Sets the TTL of the table specified in RESULT_TABLE.
CHUNK_SIZE: Indicates the number of records per chunk to be used for the result table. Must be used in combination with the RESULT_TABLE option.
CHUNK_COLUMN_MAX_MEMORY: Indicates the target maximum data size for each column in a chunk to be used for the result table. Must be used in combination with the RESULT_TABLE option.
CHUNK_MAX_MEMORY: Indicates the target maximum data size for all columns in a chunk to be used for the result table. Must be used in combination with the RESULT_TABLE option.
CREATE_INDEXES: Comma-separated list of columns on which to create indexes on the result table. Must be used in combination with the RESULT_TABLE option.
VIEW_ID: ID of the view of which the result table will be a member. The default value is ''.
PIVOT: Pivot column.
PIVOT_VALUES: The value list provided will become the column headers in the output. Should be the values from the pivot column.
GROUPING_SETS: Customize the grouping attribute sets to compute the aggregates. These sets can include ROLLUP or CUBE operators. The attribute sets should be enclosed in parentheses and can include composite attributes. All attributes specified in the grouping sets must be present in the group-by attributes.
ROLLUP: This option is used to specify multilevel aggregates.
CUBE: This option is used to specify multidimensional aggregates.
SHARD_KEY: Comma-separated list of the columns to be sharded on, e.g. 'column1, column2'. The columns specified must be present in columnNames. If any alias is given for any column name, the alias must be used, rather than the original column name. The default value is ''.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
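Options like those above are passed to the Java API as a plain Map<String,String>. A minimal sketch of assembling one for a group-by aggregate follows; the lowercase key spellings and the table name used here are illustrative assumptions, not verified constants:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GroupByOptions {
    // Build an options map for a group-by aggregate call that stores its
    // results in a persisted table, sorted by a column expression.
    // The lowercase string keys mirror the option constants documented
    // above; the resultTable name is caller-supplied.
    public static Map<String, String> build(String resultTable) {
        Map<String, String> options = new LinkedHashMap<>();
        options.put("result_table", resultTable);
        options.put("result_table_persist", "true");
        options.put("order_by", "timestamp asc, x desc");
        options.put("ttl", "120");
        return options;
    }
}
```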
-
aggregateHistogram
public AggregateHistogramResponse aggregateHistogram(AggregateHistogramRequest request) throws GPUdbException
Performs a histogram calculation given a table, a column, and an interval function. The interval is used to produce bins of that size, and the result, computed over the records falling within each bin, is returned. For each bin, the start value is inclusive, but the end value is exclusive--except for the very last bin, for which the end value is also inclusive. The value returned for each bin is the number of records in it, except when a column name is provided as a VALUE_COLUMN. In that case, the sum of the values corresponding to the VALUE_COLUMN is used as the result instead. The total number of bins requested cannot exceed 10,000.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service a request that specifies a VALUE_COLUMN.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateHistogram
public AggregateHistogramResponse aggregateHistogram(String tableName, String columnName, double start, double end, double interval, Map<String,String> options) throws GPUdbException
Performs a histogram calculation given a table, a column, and an interval function. The interval is used to produce bins of that size, and the result, computed over the records falling within each bin, is returned. For each bin, the start value is inclusive, but the end value is exclusive--except for the very last bin, for which the end value is also inclusive. The value returned for each bin is the number of records in it, except when a column name is provided as a VALUE_COLUMN. In that case, the sum of the values corresponding to the VALUE_COLUMN is used as the result instead. The total number of bins requested cannot exceed 10,000.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service a request that specifies a VALUE_COLUMN.
- Parameters:
tableName - Name of the table on which the operation will be performed. Must be an existing table, in [schema_name.]table_name format, using standard name resolution rules.
columnName - Name of a column or an expression of one or more column names over which the histogram will be calculated.
start - Lower end value of the histogram interval, inclusive.
end - Upper end value of the histogram interval, inclusive.
interval - The size of each bin within the start and end parameters.
options - Optional parameters.
VALUE_COLUMN: The name of the column to use when calculating the bin values (values are summed). The column must be a numerical type (int, double, long, float).
START: The start parameter for char types.
END: The end parameter for char types.
INTERVAL: The interval parameter for char types.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
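The bin boundary rule described above (start inclusive, end exclusive, except the final bin, which also includes the end value) can be sketched in plain Java. This is a client-side illustration of the documented semantics, not the endpoint's implementation:

```java
public class HistogramBins {
    // Assigns a value to a bin index per the rule above: each bin
    // [start + i*interval, start + (i+1)*interval) is half-open,
    // except the last bin, which also includes `end`.
    // Returns -1 for values outside [start, end].
    public static int binIndex(double value, double start, double end, double interval) {
        if (value < start || value > end) return -1;
        int nbins = (int) Math.ceil((end - start) / interval);
        int i = (int) ((value - start) / interval);
        return Math.min(i, nbins - 1); // fold the end value into the last bin
    }
}
```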
-
aggregateKMeans
public AggregateKMeansResponse aggregateKMeans(AggregateKMeansRequest request) throws GPUdbException
This endpoint runs the k-means algorithm, a heuristic algorithm that attempts to do k-means clustering. An ideal k-means clustering algorithm selects k points such that the sum of the mean squared distances of each member of the set to the nearest of the k points is minimized. The k-means algorithm, however, does not necessarily produce such an ideal cluster. It begins with a randomly selected set of k points and then refines the location of the points iteratively, settling to a local minimum. Various parameters and options are provided to control the heuristic search.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateKMeans
public AggregateKMeansResponse aggregateKMeans(String tableName, List<String> columnNames, int k, double tolerance, Map<String,String> options) throws GPUdbException
This endpoint runs the k-means algorithm, a heuristic algorithm that attempts to do k-means clustering. An ideal k-means clustering algorithm selects k points such that the sum of the mean squared distances of each member of the set to the nearest of the k points is minimized. The k-means algorithm, however, does not necessarily produce such an ideal cluster. It begins with a randomly selected set of k points and then refines the location of the points iteratively, settling to a local minimum. Various parameters and options are provided to control the heuristic search.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.
- Parameters:
tableName - Name of the table on which the operation will be performed. Must be an existing table, in [schema_name.]table_name format, using standard name resolution rules.
columnNames - List of column names on which the operation would be performed. If n columns are provided, then each of the k result points will have n dimensions corresponding to the n columns.
k - The number of mean points to be determined by the algorithm.
tolerance - Stop iterating when the distances between successive points are less than the given tolerance.
options - Optional parameters.
WHITEN: When set to 1, each of the columns is first normalized by its standard deviation; the default is not to whiten.
MAX_ITERS: Number of times to try to hit the tolerance limit before giving up; the default is 10.
NUM_TRIES: Number of times to run the k-means algorithm with different randomly selected starting points; helps avoid a local minimum. The default is 1.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of RESULT_TABLE. If RESULT_TABLE_PERSIST is FALSE (or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_RESULT_TABLE_NAME. The default value is FALSE.
RESULT_TABLE: The name of a table used to store the results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. If this option is specified, the results are not returned in the response.
RESULT_TABLE_PERSIST: If TRUE, the result table specified in RESULT_TABLE will be persisted and will not expire unless a TTL is specified. If FALSE, the result table will be an in-memory table and will expire unless a TTL is specified otherwise. The default value is FALSE.
TTL: Sets the TTL of the table specified in RESULT_TABLE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
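As a rough illustration of the iterative refinement described above, here is one Lloyd-style refinement step on 1-D data. It is a toy sketch only: the server's actual algorithm, its multi-dimensional handling, and the options above are not reproduced here.

```java
public class KMeans1D {
    // One Lloyd refinement step on 1-D data: assign each point to its
    // nearest center, then move each center to the mean of its points.
    // Centers with no assigned points are left where they are.
    public static double[] refine(double[] points, double[] centers) {
        double[] sums = new double[centers.length];
        int[] counts = new int[centers.length];
        for (double p : points) {
            int best = 0;
            for (int c = 1; c < centers.length; c++) {
                if (Math.abs(p - centers[c]) < Math.abs(p - centers[best])) best = c;
            }
            sums[best] += p;
            counts[best]++;
        }
        double[] next = centers.clone();
        for (int c = 0; c < centers.length; c++) {
            if (counts[c] > 0) next[c] = sums[c] / counts[c];
        }
        return next;
    }
}
```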
-
aggregateMinMax
public AggregateMinMaxResponse aggregateMinMax(AggregateMinMaxRequest request) throws GPUdbException
Calculates and returns the minimum and maximum values of a particular column in a table.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateMinMax
public AggregateMinMaxResponse aggregateMinMax(String tableName, String columnName, Map<String,String> options) throws GPUdbException
Calculates and returns the minimum and maximum values of a particular column in a table.
- Parameters:
tableName - Name of the table on which the operation will be performed. Must be an existing table, in [schema_name.]table_name format, using standard name resolution rules.
columnName - Name of a column or an expression of one or more columns on which the min-max will be calculated.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
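The min-max semantics are simple enough to show with a client-side equivalent for a numeric column; this is an illustration of what the response conveys, not the server's implementation:

```java
public class MinMax {
    // Computes the [min, max] pair of a numeric column client-side,
    // mirroring the two values the endpoint returns in its response.
    public static double[] minMax(double[] values) {
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (double v : values) {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        return new double[]{min, max};
    }
}
```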
-
aggregateMinMaxGeometry
public AggregateMinMaxGeometryResponse aggregateMinMaxGeometry(AggregateMinMaxGeometryRequest request) throws GPUdbException
Calculates and returns the minimum and maximum x- and y-coordinates of a particular geospatial geometry column in a table.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateMinMaxGeometry
public AggregateMinMaxGeometryResponse aggregateMinMaxGeometry(String tableName, String columnName, Map<String,String> options) throws GPUdbException
Calculates and returns the minimum and maximum x- and y-coordinates of a particular geospatial geometry column in a table.
- Parameters:
tableName - Name of the table on which the operation will be performed. Must be an existing table, in [schema_name.]table_name format, using standard name resolution rules.
columnName - Name of a geospatial geometry column on which the min-max will be calculated.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateStatistics
public AggregateStatisticsResponse aggregateStatistics(AggregateStatisticsRequest request) throws GPUdbException
Calculates the requested statistics of the given column(s) in a given table.
The available statistics are: COUNT (number of total objects), MEAN, STDV (standard deviation), VARIANCE, SKEW, KURTOSIS, SUM, MIN, MAX, WEIGHTED_AVERAGE, CARDINALITY (unique count), ESTIMATED_CARDINALITY, PERCENTILE, and PERCENTILE_RANK.
Estimated cardinality is calculated by using the hyperloglog approximation technique.
Percentiles and percentile ranks are approximate and are calculated using the t-digest algorithm. They must include the desired PERCENTILE/PERCENTILE_RANK. To compute multiple percentiles, each value must be specified separately (i.e. 'percentile(75.0),percentile(99.0),percentile_rank(1234.56),percentile_rank(-5)').
A second, comma-separated value can be added to the PERCENTILE statistic to calculate percentile resolution, e.g., a 50th percentile with 200 resolution would be 'percentile(50,200)'.
The weighted average statistic requires a weight column to be specified in WEIGHT_COLUMN_NAME. The weighted average is then defined as the sum of the products of columnName times the WEIGHT_COLUMN_NAME values divided by the sum of the WEIGHT_COLUMN_NAME values.
Additional columns can be used in the calculation of statistics via ADDITIONAL_COLUMN_NAMES. Values in these columns will be included in the overall aggregate calculation--individual aggregates will not be calculated per additional column. For instance, requesting the COUNT and MEAN of columnName x and ADDITIONAL_COLUMN_NAMES y and z, where x holds the numbers 1-10, y holds 11-20, and z holds 21-30, would return the total number of x, y, and z values (30), and the single average value across all x, y, and z values (15.5).
The response includes a list of key/value pairs of each statistic requested and its corresponding value.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateStatistics
public AggregateStatisticsResponse aggregateStatistics(String tableName, String columnName, String stats, Map<String,String> options) throws GPUdbException
Calculates the requested statistics of the given column(s) in a given table.
The available statistics are: COUNT (number of total objects), MEAN, STDV (standard deviation), VARIANCE, SKEW, KURTOSIS, SUM, MIN, MAX, WEIGHTED_AVERAGE, CARDINALITY (unique count), ESTIMATED_CARDINALITY, PERCENTILE, and PERCENTILE_RANK.
Estimated cardinality is calculated by using the hyperloglog approximation technique.
Percentiles and percentile ranks are approximate and are calculated using the t-digest algorithm. They must include the desired PERCENTILE/PERCENTILE_RANK. To compute multiple percentiles, each value must be specified separately (i.e. 'percentile(75.0),percentile(99.0),percentile_rank(1234.56),percentile_rank(-5)').
A second, comma-separated value can be added to the PERCENTILE statistic to calculate percentile resolution, e.g., a 50th percentile with 200 resolution would be 'percentile(50,200)'.
The weighted average statistic requires a weight column to be specified in WEIGHT_COLUMN_NAME. The weighted average is then defined as the sum of the products of columnName times the WEIGHT_COLUMN_NAME values divided by the sum of the WEIGHT_COLUMN_NAME values.
Additional columns can be used in the calculation of statistics via ADDITIONAL_COLUMN_NAMES. Values in these columns will be included in the overall aggregate calculation--individual aggregates will not be calculated per additional column. For instance, requesting the COUNT and MEAN of columnName x and ADDITIONAL_COLUMN_NAMES y and z, where x holds the numbers 1-10, y holds 11-20, and z holds 21-30, would return the total number of x, y, and z values (30), and the single average value across all x, y, and z values (15.5).
The response includes a list of key/value pairs of each statistic requested and its corresponding value.
- Parameters:
tableName - Name of the table on which the statistics operation will be performed, in [schema_name.]table_name format, using standard name resolution rules.
columnName - Name of the primary column for which the statistics are to be calculated.
stats - Comma-separated list of the statistics to calculate, e.g. "sum,mean". Supported values:
COUNT: Number of objects (independent of the given column(s)).
MEAN: Arithmetic mean (average), equivalent to sum/count.
STDV: Sample standard deviation (denominator is count-1).
VARIANCE: Unbiased sample variance (denominator is count-1).
SKEW: Skewness (third standardized moment).
KURTOSIS: Kurtosis (fourth standardized moment).
SUM: Sum of all values in the column(s).
MIN: Minimum value of the column(s).
MAX: Maximum value of the column(s).
WEIGHTED_AVERAGE: Weighted arithmetic mean (using the option WEIGHT_COLUMN_NAME as the weighting column).
CARDINALITY: Number of unique values in the column(s).
ESTIMATED_CARDINALITY: Estimate (via the hyperloglog technique) of the number of unique values in the column(s).
PERCENTILE: Estimate (via t-digest) of the given percentile of the column(s) (percentile(50.0) will be an approximation of the median). Add a second, comma-separated value to calculate percentile resolution, e.g., 'percentile(75,150)'.
PERCENTILE_RANK: Estimate (via t-digest) of the percentile rank of the given value in the column(s) (if the given value is the median of the column(s), percentile_rank(<median>) will return approximately 50.0).
options - Optional parameters.
ADDITIONAL_COLUMN_NAMES: A list of comma-separated column names over which statistics can be accumulated along with the primary column. All columns listed and columnName must be of the same type. Must not include the column specified in columnName, and no column can be listed twice.
WEIGHT_COLUMN_NAME: Name of the column used as the weighting attribute for the weighted average statistic.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
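The weighted-average definition above (sum of the products of the column values times the weight values, divided by the sum of the weight values) can be checked with a small client-side sketch:

```java
public class WeightedAverage {
    // Weighted average as defined above:
    //   sum(column[i] * weight[i]) / sum(weight[i])
    public static double of(double[] column, double[] weights) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < column.length; i++) {
            num += column[i] * weights[i];
            den += weights[i];
        }
        return num / den;
    }
}
```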
-
aggregateStatisticsByRange
public AggregateStatisticsByRangeResponse aggregateStatisticsByRange(AggregateStatisticsByRangeRequest request) throws GPUdbException
Divides the given set into bins and calculates statistics of the values of a value-column in each bin. The bins are based on the values of a given binning-column. The statistics that may be requested are: mean, stdv (standard deviation), variance, skew, kurtosis, sum, min, max, first, last, and weighted average. In addition to the requested statistics, the count of total samples in each bin is returned. This counts vector is just the histogram of the column used to divide the set members into bins.
The weighted average statistic requires a weight column to be specified in WEIGHT_COLUMN_NAME. The weighted average is then defined as the sum of the products of the value column times the weight column divided by the sum of the weight column.
There are two methods for binning the set members. In the first, which can be used for numeric-valued binning-columns, a min, max, and interval are specified. The number of bins, nbins, is the integer upper bound of (max-min)/interval. Values that fall in the range [min+n*interval, min+(n+1)*interval) are placed in the nth bin, where n ranges from 0..nbins-2. The final bin is [min+(nbins-1)*interval, max]. In the second method, BIN_VALUES specifies a list of binning-column values. Binning-columns whose value matches the nth member of the BIN_VALUES list are placed in the nth bin. When a list is provided, the binning-column must be of type string or int.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateStatisticsByRange
public AggregateStatisticsByRangeResponse aggregateStatisticsByRange(String tableName, String selectExpression, String columnName, String valueColumnName, String stats, double start, double end, double interval, Map<String,String> options) throws GPUdbException
Divides the given set into bins and calculates statistics of the values of a value-column in each bin. The bins are based on the values of a given binning-column. The statistics that may be requested are: mean, stdv (standard deviation), variance, skew, kurtosis, sum, min, max, first, last, and weighted average. In addition to the requested statistics, the count of total samples in each bin is returned. This counts vector is just the histogram of the column used to divide the set members into bins.
The weighted average statistic requires a weight column to be specified in WEIGHT_COLUMN_NAME. The weighted average is then defined as the sum of the products of the value column times the weight column divided by the sum of the weight column.
There are two methods for binning the set members. In the first, which can be used for numeric-valued binning-columns, a min, max, and interval are specified. The number of bins, nbins, is the integer upper bound of (max-min)/interval. Values that fall in the range [min+n*interval, min+(n+1)*interval) are placed in the nth bin, where n ranges from 0..nbins-2. The final bin is [min+(nbins-1)*interval, max]. In the second method, BIN_VALUES specifies a list of binning-column values. Binning-columns whose value matches the nth member of the BIN_VALUES list are placed in the nth bin. When a list is provided, the binning-column must be of type string or int.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.
- Parameters:
tableName - Name of the table on which the ranged-statistics operation will be performed, in [schema_name.]table_name format, using standard name resolution rules.
selectExpression - For a non-empty expression, statistics are calculated for those records for which the expression is true. The default value is ''.
columnName - Name of the binning-column used to divide the set samples into bins.
valueColumnName - Name of the value-column for which statistics are to be computed.
stats - A comma-separated list of the statistics to calculate, e.g. 'sum,mean'. Available statistics: mean, stdv (standard deviation), variance, skew, kurtosis, sum.
start - The lower bound of the binning-column.
end - The upper bound of the binning-column.
interval - The interval of a bin. Set members fall into bin i if the binning-column falls in the range [start+interval*i, start+interval*(i+1)).
options - Map of optional parameters:
ADDITIONAL_COLUMN_NAMES: A list of comma-separated value-column names over which statistics can be accumulated along with the primary value column.
BIN_VALUES: A list of comma-separated binning-column values. Values that match the nth bin_values value are placed in the nth bin.
WEIGHT_COLUMN_NAME: Name of the column used as the weighting column for the weighted_average statistic.
ORDER_COLUMN_NAME: Name of the column used for candlestick charting techniques.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
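The binning rule above (nbins as the integer upper bound of (end-start)/interval, with bin i covering [start+interval*i, start+interval*(i+1)) and the final bin also including end) can be sketched client-side; the clamping of out-of-range values here is a simplification for illustration:

```java
public class RangeBins {
    // Number of bins per the rule above: the integer upper bound
    // (ceiling) of (end - start) / interval.
    public static int nbins(double start, double end, double interval) {
        return (int) Math.ceil((end - start) / interval);
    }

    // Bin i covers [start + i*interval, start + (i+1)*interval);
    // the final bin also includes `end`. Out-of-range values are
    // clamped to the nearest bin in this sketch.
    public static int binOf(double value, double start, double end, double interval) {
        int n = nbins(start, end, interval);
        int i = (int) Math.floor((value - start) / interval);
        return Math.min(Math.max(i, 0), n - 1);
    }
}
```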
-
aggregateUniqueRaw
public RawAggregateUniqueResponse aggregateUniqueRaw(AggregateUniqueRequest request) throws GPUdbException
Returns all the unique values from a particular column (specified by columnName) of a particular table or view (specified by tableName). If columnName is a numeric column, the values will be in data. Otherwise, if columnName is a string column, the values will be in jsonEncodedResponse. The results can be paged via the offset and limit parameters.
Columns marked as store-only cannot be used with this function.
To get the first 10 unique values sorted in descending order, options would be:
{"limit":"10","sort_order":"descending"}
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a RESULT_TABLE name is specified in the options, the results are stored in a new table with that name--no results are returned in the response. Both the table name and resulting column name must adhere to standard naming conventions; any column expression will need to be aliased. If the source table's shard key is used as the columnName, the result table will be sharded; in all other cases it will be replicated. Sorting will properly function only if the result table is replicated or if there is only one processing node, and should not be relied upon in other cases. Not available if the value of columnName is an unrestricted-length string.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateUnique
public AggregateUniqueResponse aggregateUnique(AggregateUniqueRequest request) throws GPUdbException
Returns all the unique values from a particular column (specified by columnName) of a particular table or view (specified by tableName). If columnName is a numeric column, the values will be in data. Otherwise, if columnName is a string column, the values will be in jsonEncodedResponse. The results can be paged via the offset and limit parameters.
Columns marked as store-only cannot be used with this function.
To get the first 10 unique values sorted in descending order, options would be:
{"limit":"10","sort_order":"descending"}
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a RESULT_TABLE name is specified in the options, the results are stored in a new table with that name--no results are returned in the response. Both the table name and resulting column name must adhere to standard naming conventions; any column expression will need to be aliased. If the source table's shard key is used as the columnName, the result table will be sharded; in all other cases it will be replicated. Sorting will properly function only if the result table is replicated or if there is only one processing node, and should not be relied upon in other cases. Not available if the value of columnName is an unrestricted-length string.
- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
aggregateUnique
public AggregateUniqueResponse aggregateUnique(String tableName, String columnName, long offset, long limit, Map<String,String> options) throws GPUdbException
Returns all the unique values from a particular column (specified by columnName) of a particular table or view (specified by tableName). If columnName is a numeric column, the values will be in data. Otherwise, if columnName is a string column, the values will be in jsonEncodedResponse. The results can be paged via the offset and limit parameters.
Columns marked as store-only cannot be used with this function.
To get the first 10 unique values sorted in descending order, options would be:
{"limit":"10","sort_order":"descending"}
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a RESULT_TABLE name is specified in the options, the results are stored in a new table with that name--no results are returned in the response. Both the table name and resulting column name must adhere to standard naming conventions; any column expression will need to be aliased. If the source table's shard key is used as the columnName, the result table will be sharded; in all other cases it will be replicated. Sorting will properly function only if the result table is replicated or if there is only one processing node, and should not be relied upon in other cases. Not available if the value of columnName is an unrestricted-length string.
- Parameters:
tableName - Name of an existing table or view on which the operation will be performed, in [schema_name.]table_name format, using standard name resolution rules.
columnName - Name of the column or an expression containing one or more column names on which the unique function would be applied.
offset - A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limit - A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use hasMoreRecords to see if more records exist in the result to be fetched, and offset and limit to request subsequent pages of results. The default value is -9999.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of RESULT_TABLE. If RESULT_TABLE_PERSIST is FALSE (or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_RESULT_TABLE_NAME. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema as part of RESULT_TABLE and use createSchema to create the schema if non-existent] Name of a schema which is to contain the table specified in RESULT_TABLE. If the schema provided is non-existent, it will be automatically created.
EXPRESSION: Optional filter expression to apply to the table.
SORT_ORDER: String indicating how the returned values should be sorted. The default value is ASCENDING.
ORDER_BY: Comma-separated list of the columns to be sorted by as well as the sort direction, e.g., 'timestamp asc, x desc'. The default value is ''.
RESULT_TABLE: The name of the table used to store the results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. If present, no results are returned in the response. Not available if columnName is an unrestricted-length string.
RESULT_TABLE_PERSIST: If TRUE, the result table specified in RESULT_TABLE will be persisted and will not expire unless a TTL is specified. If FALSE, the result table will be an in-memory table and will expire unless a TTL is specified otherwise. The default value is FALSE.
RESULT_TABLE_FORCE_REPLICATED: Force the result table to be replicated (ignores any sharding). Must be used in combination with the RESULT_TABLE option. The default value is FALSE.
RESULT_TABLE_GENERATE_PK: If TRUE, set a primary key for the result table. Must be used in combination with the RESULT_TABLE option. The default value is FALSE.
TTL: Sets the TTL of the table specified in RESULT_TABLE.
CHUNK_SIZE: Indicates the number of records per chunk to be used for the result table. Must be used in combination with the RESULT_TABLE option.
CHUNK_COLUMN_MAX_MEMORY: Indicates the target maximum data size for each column in a chunk to be used for the result table. Must be used in combination with the RESULT_TABLE option.
CHUNK_MAX_MEMORY: Indicates the target maximum data size for all columns in a chunk to be used for the result table. Must be used in combination with the RESULT_TABLE option.
COMPRESSION_CODEC: The default compression codec for the result table's columns.
VIEW_ID: ID of the view of which the result table will be a member. The default value is ''.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
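A client-side sketch of the unique plus offset/limit paging semantics described above; treating any negative limit (such as END_OF_SET, -9999) as "all remaining" is an assumption for this illustration:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class UniquePage {
    // Distinct values, sorted ascending (matching the default
    // SORT_ORDER), then one page selected via offset and limit.
    // A negative limit returns everything from the offset onward.
    public static List<String> page(List<String> values, int offset, int limit) {
        List<String> unique = new ArrayList<>(new LinkedHashSet<>(values));
        unique.sort(null); // natural (ascending) order
        int from = Math.min(offset, unique.size());
        int to = (limit < 0) ? unique.size() : Math.min(from + limit, unique.size());
        return unique.subList(from, to);
    }
}
```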
-
aggregateUnpivotRaw
public RawAggregateUnpivotResponse aggregateUnpivotRaw(AggregateUnpivotRequest request) throws GPUdbException
Rotate the column values into row values. For unpivot details and examples, see Unpivot. For limitations, see Unpivot Limitations.
Unpivot is used to normalize tables that are built for cross-tabular reporting purposes. The unpivot operator rotates the column values for all the pivoted columns. A variable column, a value column, and all columns from the source table except the unpivot columns are projected into the result table. The variable and value columns in the result table indicate the pivoted column name and values, respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
aggregateUnpivot
public AggregateUnpivotResponse aggregateUnpivot(AggregateUnpivotRequest request) throws GPUdbException
Rotate the column values into row values. For unpivot details and examples, see Unpivot. For limitations, see Unpivot Limitations.
Unpivot is used to normalize tables that are built for cross-tabular reporting purposes. The unpivot operator rotates the column values for all the pivoted columns. A variable column, a value column, and all columns from the source table except the unpivot columns are projected into the result table. The variable and value columns in the result table indicate the pivoted column name and values, respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
aggregateUnpivot
public AggregateUnpivotResponse aggregateUnpivot(String tableName, List<String> columnNames, String variableColumnName, String valueColumnName, List<String> pivotedColumns, Map<String,String> options) throws GPUdbException
Rotates column values into rows. For unpivot details and examples, see Unpivot. For limitations, see Unpivot Limitations.
Unpivot is used to normalize tables that are built for cross tabular reporting purposes. The unpivot operator rotates the column values for all the pivoted columns. A variable column, value column and all columns from the source table except the unpivot columns are projected into the result table. The variable column and value columns in the result table indicate the pivoted column name and values respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
- Parameters:
tableName - Name of the table on which the operation will be performed. Must be an existing table/view, in [schema_name.]table_name format, using standard name resolution rules.
columnNames - List of column names or expressions. A wildcard '*' can be used to include all the non-pivoted columns from the source table.
variableColumnName - Specifies the variable/parameter column name. The default value is ''.
valueColumnName - Specifies the value column name. The default value is ''.
pivotedColumns - List of one or more values, typically the column names of the input table. All the columns in the source table must have the same data type.
options - Optional parameters.
- CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of RESULT_TABLE. If RESULT_TABLE_PERSIST is FALSE (or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_RESULT_TABLE_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
- COLLECTION_NAME: [DEPRECATED--please specify the containing schema as part of RESULT_TABLE and use createSchema to create the schema if non-existent] Name of a schema which is to contain the table specified in RESULT_TABLE. If the schema is non-existent, it will be automatically created.
- RESULT_TABLE: The name of a table used to store the results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. If present, no results are returned in the response.
- RESULT_TABLE_PERSIST: If TRUE, then the result table specified in RESULT_TABLE will be persisted and will not expire unless a TTL is specified. If FALSE, then the result table will be an in-memory table and will expire unless a TTL is specified otherwise. Supported values: TRUE, FALSE. The default value is FALSE.
- EXPRESSION: Filter expression to apply to the table prior to unpivot processing.
- ORDER_BY: Comma-separated list of the columns to be sorted by; e.g., 'timestamp asc, x desc'. The columns specified must be present in the input table. If any alias is given for any column name, the alias must be used, rather than the original column name. The default value is ''.
- CHUNK_SIZE: Indicates the number of records per chunk to be used for the result table. Must be used in combination with the RESULT_TABLE option.
- CHUNK_COLUMN_MAX_MEMORY: Indicates the target maximum data size for each column in a chunk to be used for the result table. Must be used in combination with the RESULT_TABLE option.
- CHUNK_MAX_MEMORY: Indicates the target maximum data size for all columns in a chunk to be used for the result table. Must be used in combination with the RESULT_TABLE option.
- COMPRESSION_CODEC: The default compression codec for the result table's columns.
- LIMIT: The number of records to keep. The default value is ''.
- TTL: Sets the TTL of the table specified in RESULT_TABLE.
- VIEW_ID: ID of the view this result table is part of. The default value is ''.
- CREATE_INDEXES: Comma-separated list of columns on which to create indexes on the table specified in RESULT_TABLE. The columns specified must be present in the output column names. If any alias is given for any column name, the alias must be used, rather than the original column name.
- RESULT_TABLE_FORCE_REPLICATED: Force the result table to be replicated (ignores any sharding). Must be used in combination with the RESULT_TABLE option. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
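As a sketch of how the positional aggregateUnpivot overload above might be invoked, the following assembles the argument lists and the options map. All identifiers here (the table `example.sales`, columns `region`, `q1`..`q4`, and the result table name) are hypothetical, and the lowercase option-key strings are assumed to correspond to the documented option constants:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: assembling arguments for aggregateUnpivot (hypothetical names).
public class UnpivotArgs {
    static List<String> pivotedColumns() {
        // Columns to be rotated into rows; all must share one data type.
        return Arrays.asList("q1", "q2", "q3", "q4");
    }

    static Map<String, String> options() {
        Map<String, String> options = new HashMap<>();
        options.put("result_table", "example.sales_unpivoted"); // store results server-side
        options.put("result_table_persist", "false");           // in-memory result table
        options.put("order_by", "region asc");                  // sort the unpivoted rows
        return options;
    }

    public static void main(String[] args) {
        // With a live cluster, the call would look like (not executed here):
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // gpudb.aggregateUnpivot("example.sales", Arrays.asList("region"),
        //         "quarter", "amount", pivotedColumns(), options());
        System.out.println(options());
    }
}
```

Since RESULT_TABLE is set, no records come back in the response; the unpivoted rows land in the named table instead.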
-
alterBackup
public AlterBackupResponse alterBackup(AlterBackupRequest request) throws GPUdbException
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
alterBackup
public AlterBackupResponse alterBackup(String backupName, String action, String value, String datasinkName, Map<String,String> options) throws GPUdbException
- Parameters:
backupName - Name of the backup to be altered.
action - Operation to be applied. Supported values:
- CHECKSUM: Calculate checksums for backed-up files.
- DDL_ONLY: Whether or not to only save DDL and not back up table data when taking future snapshots; set value to 'true' or 'false' for DDL only or DDL and table data, respectively.
- MAX_INCREMENTAL_BACKUPS_TO_KEEP: Maximum number of incremental snapshots to keep when taking future snapshots; set value to the number of snapshots to keep.
- MERGE: Merges all snapshots within a backup and creates a single full snapshot.
- PURGE: Deletes a snapshot from a backup; set value to the snapshot ID to purge.
value - Value of the modification, depending on action.
datasinkName - Data sink through which the backup is accessible.
options - Optional parameters.
- COMMENT: Comments to store with the backup.
- DRY_RUN: Whether or not to perform a dry run of a backup alteration. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
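Each alterBackup call pairs one action with a value interpreted in that action's terms. A minimal sketch, with hypothetical backup and data sink names, showing the MAX_INCREMENTAL_BACKUPS_TO_KEEP action previewed via the documented DRY_RUN option:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: choosing an alterBackup action/value pair (hypothetical names).
public class AlterBackupArgs {
    static Map<String, String> options(boolean dryRun) {
        Map<String, String> options = new HashMap<>();
        options.put("dry_run", dryRun ? "true" : "false"); // preview the alteration only
        return options;
    }

    public static void main(String[] args) {
        // Keep at most 7 incremental snapshots on future backups (not executed here):
        // gpudb.alterBackup("nightly_backup", "max_incremental_backups_to_keep",
        //         "7", "backup_sink", options(true));
        System.out.println(options(true));
    }
}
```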
-
alterCredential
public AlterCredentialResponse alterCredential(AlterCredentialRequest request) throws GPUdbException
Alters the properties of an existing credential.
- Parameters:
  request - Request object containing the parameters for the operation.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
alterCredential
public AlterCredentialResponse alterCredential(String credentialName, Map<String,String> credentialUpdatesMap, Map<String,String> options) throws GPUdbException
Alters the properties of an existing credential.
- Parameters:
  credentialName - Name of the credential to be altered. Must be an existing credential.
  credentialUpdatesMap - Map containing the properties of the credential to be updated. Error if empty.
  - TYPE: New type for the credential. Supported values:
  - IDENTITY: New user for the credential
  - SECRET: New password for the credential
  - SCHEMA_NAME: Updates the schema name. If SCHEMA_NAME doesn't exist, an error will be thrown. If SCHEMA_NAME is empty, then the user's default schema will be used.
  options - Optional parameters.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
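A sketch of building the credentialUpdatesMap for the overload above, rotating the user and secret of a credential in one call. The credential name and values are hypothetical, and the lowercase keys are assumed to match the documented IDENTITY/SECRET constants:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: credentialUpdatesMap for alterCredential (hypothetical values).
public class CredentialUpdates {
    static Map<String, String> updates(String newUser, String newSecret) {
        Map<String, String> updates = new HashMap<>();
        updates.put("identity", newUser); // new user for the credential
        updates.put("secret", newSecret); // new password for the credential
        return updates;
    }

    public static void main(String[] args) {
        // Not executed here; requires a live cluster:
        // gpudb.alterCredential("s3_cred", updates("svc_user", "s3cr3t"), new HashMap<>());
        System.out.println(updates("svc_user", "****").keySet());
    }
}
```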
-
alterDatasink
public AlterDatasinkResponse alterDatasink(AlterDatasinkRequest request) throws GPUdbException
Alters the properties of an existing data sink.
- Parameters:
  request - Request object containing the parameters for the operation.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
alterDatasink
public AlterDatasinkResponse alterDatasink(String name, Map<String,String> datasinkUpdatesMap, Map<String,String> options) throws GPUdbException
Alters the properties of an existing data sink.
- Parameters:
  name - Name of the data sink to be altered. Must be an existing data sink.
  datasinkUpdatesMap - Map containing the properties of the data sink to be updated. Error if empty.
  - DESTINATION: Destination for the output data in 'destination_type://path[:port]' format. Supported destination types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka', and 's3'.
  - CONNECTION_TIMEOUT: Timeout in seconds for connecting to this sink
  - WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this sink
  - CREDENTIAL: Name of the credential object to be used in this data sink
  - S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink
  - S3_REGION: Name of the Amazon S3 region where the given bucket is located
  - S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
    - TRUE: Connect with SSL verification
    - FALSE: Connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.
    The default value is TRUE.
  - S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 sink. Supported values:
    - TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
    - FALSE: Use path-style URI for requests.
    The default value is TRUE.
  - S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user
  - S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used to encrypt data
  - S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data
  - S3_ENCRYPTION_TYPE: Server-side encryption type
  - S3_KMS_KEY_ID: KMS key
  - HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
  - HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user
  - HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
  - AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified
  - AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink
  - AZURE_TENANT_ID: Active Directory tenant ID (or directory ID)
  - AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink
  - AZURE_OAUTH_TOKEN: OAuth token to access the given storage container
  - GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink
  - GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink
  - GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink
  - JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
  - JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class
  - KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
  - KAFKA_TOPIC_NAME: Name of the Kafka topic to use for this data sink, if it references a Kafka broker
  - ANONYMOUS: Create an anonymous connection to the storage provider--DEPRECATED: this is now the default. Specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
  - USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
  - USE_HTTPS: Use https to connect to the data sink if true, otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
  - MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'.
  - MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'.
  - JSON_FORMAT: The desired format of JSON-encoded notification messages. Supported values: The default value is FLAT.
  - SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: TRUE, FALSE. The default value is FALSE.
  - SCHEMA_NAME: Updates the schema name. If SCHEMA_NAME doesn't exist, an error will be thrown. If SCHEMA_NAME is empty, then the user's default schema will be used.
options - Optional parameters.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
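The datasinkUpdatesMap is just key/value strings; a minimal sketch of repointing an existing sink at an S3 bucket, assuming hypothetical sink, bucket, and region names and lowercase forms of the documented keys:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: datasinkUpdatesMap repointing a sink at S3 (hypothetical names).
public class DatasinkUpdates {
    static Map<String, String> s3Updates(String bucket, String region) {
        Map<String, String> updates = new HashMap<>();
        updates.put("destination", "s3://" + bucket); // 'destination_type://path' format
        updates.put("s3_bucket_name", bucket);
        updates.put("s3_region", region);
        updates.put("s3_verify_ssl", "true");         // keep SSL verification on
        return updates;
    }

    public static void main(String[] args) {
        // Not executed here; requires a live cluster:
        // gpudb.alterDatasink("events_sink", s3Updates("my-bucket", "us-east-1"),
        //         new HashMap<>());
        System.out.println(s3Updates("my-bucket", "us-east-1"));
    }
}
```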
-
alterDatasource
public AlterDatasourceResponse alterDatasource(AlterDatasourceRequest request) throws GPUdbException
Alters the properties of an existing data source.
- Parameters:
  request - Request object containing the parameters for the operation.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
alterDatasource
public AlterDatasourceResponse alterDatasource(String name, Map<String,String> datasourceUpdatesMap, Map<String,String> options) throws GPUdbException
Alters the properties of an existing data source.
- Parameters:
  name - Name of the data source to be altered. Must be an existing data source.
  datasourceUpdatesMap - Map containing the properties of the data source to be updated. Error if empty.
  - LOCATION: Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.
  - USER_NAME: Name of the remote system user; may be an empty string
  - PASSWORD: Password for the remote system user; may be an empty string
  - SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: TRUE, FALSE. The default value is FALSE.
  - CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider
  - WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider
  - CREDENTIAL: Name of the credential object to be used in the data source
  - S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source
  - S3_REGION: Name of the Amazon S3 region where the given bucket is located
  - S3_VERIFY_SSL: Whether to verify SSL connections. Supported values:
    - TRUE: Connect with SSL verification
    - FALSE: Connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.
    The default value is TRUE.
  - S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values:
    - TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL.
    - FALSE: Use path-style URI for requests.
    The default value is TRUE.
  - S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has required S3 permissions that can be assumed for the given S3 IAM user
  - S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used to encrypt data
  - S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data
  - HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file.
  - HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user
  - HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: TRUE, FALSE. The default value is FALSE.
  - AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified
  - AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source
  - AZURE_TENANT_ID: Active Directory tenant ID (or directory ID)
  - AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source
  - AZURE_OAUTH_TOKEN: OAuth token to access the given storage container
  - GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source
  - GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source
  - GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source
  - JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file.
  - JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class
  - KAFKA_URL: The publicly-accessible full path URL to the Kafka broker, e.g., 'http://172.123.45.67:9300'.
  - KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source
  - ANONYMOUS: Create an anonymous connection to the storage provider--DEPRECATED: this is now the default. Specify use_managed_credentials for a non-anonymous connection. Supported values: TRUE, FALSE. The default value is TRUE.
  - USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, cloud provider user settings will be used instead. Supported values: TRUE, FALSE. The default value is FALSE.
  - USE_HTTPS: Use https to connect to the data source if true, otherwise use http. Supported values: TRUE, FALSE. The default value is TRUE.
  - SCHEMA_NAME: Updates the schema name. If SCHEMA_NAME doesn't exist, an error will be thrown. If SCHEMA_NAME is empty, then the user's default schema will be used.
  - SCHEMA_REGISTRY_CONNECTION_RETRIES: Confluent Schema Registry connection retries
  - SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema Registry connection timeout (in seconds)
  - SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name.
  - SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format.
  - SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional).
options - Optional parameters.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
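For variety against the S3 sink example, here is a sketch of a datasourceUpdatesMap retargeting a Kafka data source along with its Confluent Schema Registry settings. The broker URL, topic, and credential names are all hypothetical, and the lowercase keys are assumed forms of the documented constants:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: datasourceUpdatesMap for a Kafka source (hypothetical names).
public class DatasourceUpdates {
    static Map<String, String> kafkaUpdates() {
        Map<String, String> updates = new HashMap<>();
        updates.put("location", "kafka://broker.internal:9092");       // provider://path:port
        updates.put("kafka_topic_name", "orders");
        updates.put("schema_registry_location", "registry.internal:8081");
        updates.put("schema_registry_credential", "registry_cred");
        return updates;
    }

    public static void main(String[] args) {
        // Not executed here; requires a live cluster:
        // gpudb.alterDatasource("orders_source", kafkaUpdates(), new HashMap<>());
        System.out.println(kafkaUpdates());
    }
}
```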
-
alterDirectory
public AlterDirectoryResponse alterDirectory(AlterDirectoryRequest request) throws GPUdbException
Alters an existing directory in KiFS.
- Parameters:
  request - Request object containing the parameters for the operation.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
alterDirectory
public AlterDirectoryResponse alterDirectory(String directoryName, Map<String,String> directoryUpdatesMap, Map<String,String> options) throws GPUdbException
Alters an existing directory in KiFS.
- Parameters:
  directoryName - Name of the directory in KiFS to be altered.
  directoryUpdatesMap - Map containing the properties of the directory to be altered. Error if empty.
  - DATA_LIMIT: The maximum capacity, in bytes, to apply to the directory. Set to -1 to indicate no upper limit.
  options - Optional parameters. The default value is an empty Map.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
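DATA_LIMIT is the only documented key in this updates map, and its value is a byte count encoded as a string (with -1 meaning unlimited). A minimal sketch with a hypothetical directory name:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: directoryUpdatesMap for alterDirectory (hypothetical directory name).
public class DirectoryUpdates {
    static Map<String, String> dataLimit(long bytes) {
        Map<String, String> updates = new HashMap<>();
        // -1 indicates no upper limit, per the documented DATA_LIMIT semantics.
        updates.put("data_limit", Long.toString(bytes));
        return updates;
    }

    public static void main(String[] args) {
        // Cap a staging directory at 5 GiB (not executed here):
        // gpudb.alterDirectory("udf_staging", dataLimit(5L * 1024 * 1024 * 1024),
        //         new HashMap<>());
        System.out.println(dataLimit(-1));
    }
}
```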
-
alterEnvironment
public AlterEnvironmentResponse alterEnvironment(AlterEnvironmentRequest request) throws GPUdbException
Alters an existing environment which can be referenced by a user-defined function (UDF).
- Parameters:
  request - Request object containing the parameters for the operation.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
alterEnvironment
public AlterEnvironmentResponse alterEnvironment(String environmentName, String action, String value, Map<String,String> options) throws GPUdbException
Alters an existing environment which can be referenced by a user-defined function (UDF).
- Parameters:
  environmentName - Name of the environment to be altered.
  action - Modification operation to be applied. Supported values:
  - INSTALL_PACKAGE: Install a Python package from PyPI, an external data source, or KiFS
  - INSTALL_REQUIREMENTS: Install packages from a requirements file
  - UNINSTALL_PACKAGE: Uninstall a Python package.
  - UNINSTALL_REQUIREMENTS: Uninstall packages from a requirements file
  - RESET: Uninstalls all packages in the environment and resets it to the original state at time of creation
  - REBUILD: Recreates the environment and re-installs all packages, upgrading the packages if necessary based on dependencies
  value - The value of the modification, depending on action. For example, if action is INSTALL_PACKAGE, this would be the Python package name. If action is INSTALL_REQUIREMENTS, this would be the path of a requirements file from which to install packages. If an external data source is specified in DATASOURCE_NAME, this can be the path to a wheel file or source archive. Alternatively, if installing from a file (wheel or source archive), the value may be a reference to a file in KiFS.
  options - Optional parameters.
  - DATASOURCE_NAME: Name of an existing external data source from which packages specified in value can be loaded
  The default value is an empty Map.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
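The action/value pairing can be captured in a small helper. A sketch of installing a package through an external data source; the environment, package, and data source names are hypothetical, and the lowercase strings are assumed forms of the documented constants:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: pairing an alterEnvironment action with its value and options.
public class EnvironmentChange {
    final String action;
    final String value;
    final Map<String, String> options = new HashMap<>();

    EnvironmentChange(String action, String value) {
        this.action = action;
        this.value = value;
    }

    static EnvironmentChange installFromDatasource(String pkg, String datasource) {
        EnvironmentChange change = new EnvironmentChange("install_package", pkg);
        change.options.put("datasource_name", datasource); // load the package from here
        return change;
    }

    public static void main(String[] args) {
        EnvironmentChange c = installFromDatasource("numpy", "pypi_mirror");
        // Not executed here; requires a live cluster:
        // gpudb.alterEnvironment("ml_env", c.action, c.value, c.options);
        System.out.println(c.action + " " + c.value);
    }
}
```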
-
alterGraph
public AlterGraphResponse alterGraph(AlterGraphRequest request) throws GPUdbException
- Throws:
GPUdbException
-
alterGraph
public AlterGraphResponse alterGraph(String graphName, String action, String actionArg, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
alterModel
public AlterModelResponse alterModel(AlterModelRequest request) throws GPUdbException
- Throws:
GPUdbException
-
alterModel
public AlterModelResponse alterModel(String modelName, String action, String value, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
alterResourceGroup
public AlterResourceGroupResponse alterResourceGroup(AlterResourceGroupRequest request) throws GPUdbException
Alters the properties of an existing resource group to facilitate resource management.
- Parameters:
  request - Request object containing the parameters for the operation.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
alterResourceGroup
public AlterResourceGroupResponse alterResourceGroup(String name, Map<String,Map<String,String>> tierAttributes, String ranking, String adjoiningResourceGroup, Map<String,String> options) throws GPUdbException
Alters the properties of an existing resource group to facilitate resource management.
- Parameters:
  name - Name of the group to be altered. Must be an existing resource group name, or an empty string when used in conjunction with IS_DEFAULT_GROUP.
  tierAttributes - Optional map containing tier names and their respective attribute group limits. The only valid attribute limit that can be set is max_memory (in bytes) for the VRAM and RAM tiers. For instance, to set max VRAM capacity to 1GB per rank per GPU and max RAM capacity to 10GB per rank, use: {'VRAM':{'max_memory':'1000000000'}, 'RAM':{'max_memory':'10000000000'}}.
  - MAX_MEMORY: Maximum amount of memory usable at one time, per rank, per GPU, for the VRAM tier; or maximum amount of memory usable at one time, per rank, for the RAM tier.
  The default value is an empty Map.
  ranking - If the resource group ranking is to be updated, this indicates the relative ranking among existing resource groups where this resource group will be placed. Supported values:
  - EMPTY_STRING: Don't change the ranking
  - FIRST: Make this resource group the new first one in the ordering
  - LAST: Make this resource group the new last one in the ordering
  - BEFORE: Place this resource group before the one specified by adjoiningResourceGroup in the ordering
  - AFTER: Place this resource group after the one specified by adjoiningResourceGroup in the ordering
  The default value is EMPTY_STRING.
  adjoiningResourceGroup - If ranking is BEFORE or AFTER, this field indicates the resource group before or after which the current group will be placed; otherwise, leave blank. The default value is ''.
  options - Optional parameters.
  - MAX_CPU_CONCURRENCY: Maximum number of simultaneous threads that will be used to execute a request, per rank, for this group. The minimum allowed value is '4'.
  - MAX_DATA: Maximum amount of data, per rank, in bytes, that can be used by all database objects within this group. Set to -1 to indicate no upper limit. The minimum allowed value is '-1'.
  - MAX_SCHEDULING_PRIORITY: Maximum priority of a scheduled task for this group. The minimum allowed value is '1'. The maximum allowed value is '100'.
  - MAX_TIER_PRIORITY: Maximum priority of a tiered object for this group. The minimum allowed value is '1'. The maximum allowed value is '10'.
  - IS_DEFAULT_GROUP: If TRUE, this request applies to the global default resource group. It is an error for this field to be TRUE when the name field is also populated. Supported values: TRUE, FALSE. The default value is FALSE.
  - PERSIST: If TRUE and a system-level change was requested, the system configuration will be written to disk upon successful application of this request. This will commit the changes from this request and any additional in-memory modifications. Supported values: TRUE, FALSE. The default value is TRUE.
  The default value is an empty Map.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
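The tierAttributes argument is a nested map. The following builds exactly the documented example (1 GB VRAM per rank per GPU, 10 GB RAM per rank); only the resource group name in the commented call is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: nested tierAttributes map for alterResourceGroup,
// mirroring the documented {'VRAM':{'max_memory':...}, 'RAM':{...}} example.
public class ResourceGroupArgs {
    static Map<String, Map<String, String>> tierAttributes() {
        Map<String, Map<String, String>> tiers = new HashMap<>();
        Map<String, String> vram = new HashMap<>();
        vram.put("max_memory", "1000000000");  // 1 GB per rank, per GPU
        Map<String, String> ram = new HashMap<>();
        ram.put("max_memory", "10000000000");  // 10 GB per rank
        tiers.put("VRAM", vram);
        tiers.put("RAM", ram);
        return tiers;
    }

    public static void main(String[] args) {
        // Empty ranking and adjoining group leave the ordering unchanged
        // ("analysts" is a hypothetical group name; not executed here):
        // gpudb.alterResourceGroup("analysts", tierAttributes(), "", "", new HashMap<>());
        System.out.println(tierAttributes());
    }
}
```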
-
alterRole
public AlterRoleResponse alterRole(AlterRoleRequest request) throws GPUdbException
Alters a Role.
- Parameters:
  request - Request object containing the parameters for the operation.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
alterRole
public AlterRoleResponse alterRole(String name, String action, String value, Map<String,String> options) throws GPUdbException
Alters a Role.
- Parameters:
  name - Name of the role to be altered. Must be an existing role.
  action - Modification operation to be applied to the role. Supported values:
  - SET_COMMENT: Sets the comment for an internal role.
  - SET_RESOURCE_GROUP: Sets the resource group for an internal role. The resource group must exist; otherwise, an empty string assigns the role to the default resource group.
  value - The value of the modification, depending on action.
  options - Optional parameters. The default value is an empty Map.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
alterSchema
public AlterSchemaResponse alterSchema(AlterSchemaRequest request) throws GPUdbException
Used to change the name of a SQL-style schema, specified in schemaName.
- Parameters:
  request - Request object containing the parameters for the operation.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
alterSchema
public AlterSchemaResponse alterSchema(String schemaName, String action, String value, Map<String,String> options) throws GPUdbException
Used to change the name of a SQL-style schema, specified in schemaName.
- Parameters:
  schemaName - Name of the schema to be altered.
  action - Modification operation to be applied. Supported values:
  - ADD_COMMENT: Adds a comment describing the schema
  - RENAME_SCHEMA: Renames a schema to value. Has the same naming restrictions as tables.
  value - The value of the modification, depending on action. For RENAME_SCHEMA, this is the new name of the schema.
  options - Optional parameters. The default value is an empty Map.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
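A sketch of the RENAME_SCHEMA action, where the value argument carries the new schema name. Both schema names are hypothetical, and the lowercase action string is an assumed form of the documented constant:

```java
// Sketch: argument triple for alterSchema's RENAME_SCHEMA action
// (schema names are hypothetical).
public class SchemaRename {
    static String[] renameArgs(String oldName, String newName) {
        // The RENAME_SCHEMA action takes the new schema name as its value.
        return new String[] { oldName, "rename_schema", newName };
    }

    public static void main(String[] args) {
        String[] a = renameArgs("staging", "staging_v2");
        // Not executed here; requires a live cluster:
        // gpudb.alterSchema(a[0], a[1], a[2], new java.util.HashMap<>());
        System.out.println(String.join(" ", a));
    }
}
```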
-
alterSystemProperties
public AlterSystemPropertiesResponse alterSystemProperties(AlterSystemPropertiesRequest request) throws GPUdbException
The alterSystemProperties endpoint is primarily used to simplify the testing of the system and is not expected to be used during normal execution. Commands are given through the propertyUpdatesMap, whose keys are commands and whose values are strings representing integer values (for example, '8000') or boolean values ('true' or 'false').
- Parameters:
  request - Request object containing the parameters for the operation.
- Returns:
  Response object containing the results of the operation.
- Throws:
  GPUdbException - if an error occurs during the operation.
-
alterSystemProperties
public AlterSystemPropertiesResponse alterSystemProperties(Map<String,String> propertyUpdatesMap, Map<String,String> options) throws GPUdbException
The alterSystemProperties endpoint is primarily used to simplify the testing of the system and is not expected to be used during normal execution. Commands are given through the propertyUpdatesMap, whose keys are commands and whose values are strings representing integer values (for example, '8000') or boolean values ('true' or 'false').
- Parameters:
propertyUpdatesMap - Map containing the properties of the system to be updated. Error if empty.
- CONCURRENT_KERNEL_EXECUTION: Enables concurrent kernel execution if the value is TRUE and disables it if the value is FALSE. Supported values: TRUE, FALSE.
- SUBTASK_CONCURRENCY_LIMIT: Sets the maximum number of simultaneous threads allocated to a given request, on each rank. Note that thread allocation may also be limited by resource group limits and/or system load.
- CHUNK_SIZE: Sets the number of records per chunk to be used for all new tables.
- CHUNK_COLUMN_MAX_MEMORY: Sets the target maximum data size for each column in a chunk to be used for all new tables.
- CHUNK_MAX_MEMORY: Indicates the target maximum data size for all columns in a chunk to be used for all new tables.
- EXECUTION_MODE: Sets the execution_mode for kernel executions to the specified string value. Possible values are host, device, default (engine decides), or an integer value that indicates the max chunk size to execute on the host.
- EXTERNAL_FILES_DIRECTORY: Sets the root directory path where external table data files are accessed from. Path must exist on the head node.
- REQUEST_TIMEOUT: Number of minutes after which filtering (e.g., filter) and aggregating (e.g., aggregateGroupBy) queries will time out. The default value is '20'. The minimum allowed value is '0'. The maximum allowed value is '1440'.
- MAX_GET_RECORDS_SIZE: The maximum number of records the database will serve for a given data retrieval call. The default value is '20000'. The minimum allowed value is '0'. The maximum allowed value is '1000000'.
- ENABLE_AUDIT: Enable or disable auditing.
- AUDIT_HEADERS: Enable or disable auditing of request headers.
- AUDIT_BODY: Enable or disable auditing of request bodies.
- AUDIT_DATA: Enable or disable auditing of request data.
- AUDIT_RESPONSE: Enable or disable auditing of response information.
- SHADOW_AGG_SIZE: Size of the shadow aggregate chunk cache in bytes. The default value is '10000000'. The minimum allowed value is '0'. The maximum allowed value is '2147483647'.
- SHADOW_FILTER_SIZE: Size of the shadow filter chunk cache in bytes. The default value is '10000000'. The minimum allowed value is '0'. The maximum allowed value is '2147483647'.
- ENABLE_OVERLAPPED_EQUI_JOIN: Enable the overlapped-equi-join filter. The default value is 'true'.
- ENABLE_ONE_STEP_COMPOUND_EQUI_JOIN: Enable the one-step compound-equi-join algorithm. The default value is 'true'.
- KAFKA_BATCH_SIZE: Maximum number of records to be ingested in a single batch. The default value is '1000'. The minimum allowed value is '1'. The maximum allowed value is '10000000'.
- KAFKA_POLL_TIMEOUT: Maximum time (milliseconds) for each poll to get records from Kafka. The default value is '0'. The minimum allowed value is '0'. The maximum allowed value is '1000'.
- KAFKA_WAIT_TIME: Maximum time (seconds) to buffer records received from Kafka before ingestion. The default value is '30'. The minimum allowed value is '1'. The maximum allowed value is '120'.
- EGRESS_PARQUET_COMPRESSION: Parquet file compression type. Supported values: The default value is SNAPPY.
- EGRESS_SINGLE_FILE_MAX_SIZE: Max file size (in MB) to allow saving to a single file. May be overridden by target limitations. The default value is '10000'. The minimum allowed value is '1'. The maximum allowed value is '200000'.
- MAX_CONCURRENT_KERNELS: Sets the max_concurrent_kernels value of the configuration. The minimum allowed value is '0'. The maximum allowed value is '256'.
- SYSTEM_METADATA_RETENTION_PERIOD: Sets the system_metadata.retention_period value of the configuration. The minimum allowed value is '1'.
- TCS_PER_TOM: Size of the worker rank data calculation thread pool. This is primarily used for computation-based operations such as aggregates and record retrieval. The minimum allowed value is '2'. The maximum allowed value is '8192'.
- TPS_PER_TOM: Size of the worker rank data processing thread pool. This includes operations such as inserts, updates, and deletes on table data. Multi-head inserts are not affected by this limit. The minimum allowed value is '2'. The maximum allowed value is '8192'.
- BACKGROUND_WORKER_THREADS: Size of the worker rank background thread pool. This includes background operations such as watermark evictions and catalog table updates. The minimum allowed value is '1'. The maximum allowed value is '8192'.
- LOG_DEBUG_JOB_INFO: Outputs various job-related information to the rank logs. Used for troubleshooting.
- ENABLE_THREAD_HANG_LOGGING: Log a stack trace for any thread that runs longer than a defined threshold. Used for troubleshooting. The default value is 'true'.
- AI_ENABLE_RAG: Enable RAG. The default value is 'false'.
- AI_API_PROVIDER: AI API provider type
- AI_API_URL: AI API URL
- AI_API_KEY: AI API key
- AI_API_CONNECTION_TIMEOUT: AI API connection timeout in seconds
- AI_API_EMBEDDINGS_MODEL: AI API model name
- TELM_PERSIST_QUERY_METRICS: Enable or disable persisting of query metrics.
- POSTGRES_PROXY_IDLE_CONNECTION_TIMEOUT: Idle connection timeout in seconds
- POSTGRES_PROXY_KEEP_ALIVE: Enable Postgres proxy keep-alive. The default value is 'false'.
- KIFS_DIRECTORY_DATA_LIMIT: The default maximum capacity to apply when creating a KiFS directory (bytes). The minimum allowed value is '-1'.
- COMPRESSION_CODEC: The default compression algorithm applied to any column without a column-level or table-level default compression specified at the time it was created
- DISK_AUTO_OPTIMIZE_TIMEOUT: Time interval in seconds after which the database will apply optimizations/transformations to persisted data, such as compression. The minimum allowed value is '0'.
- HA_CONSUMER_REPLAY_OFFSET: Initializes HA replay from the given timestamp (as milliseconds since unix epoch). The minimum allowed value is '-1'.
options- Optional parameters.
EVICT_TO_COLD: If TRUE and evict_columns is specified, the given objects will be evicted to cold storage (if such a tier exists). Supported values:
PERSIST: If TRUE, the system configuration will be written to disk upon successful application of this request. This will commit the changes from this request and any additional in-memory modifications. Supported values: The default value is TRUE.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
alterTable
public AlterTableResponse alterTable(AlterTableRequest request) throws GPUdbException
Apply various modifications to a table or view. The available modifications include the following:
Manage a table's columns--a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not.
External tables cannot be modified except for their refresh method.
Create or delete a column (attribute) index, low-cardinality index, chunk skip index, geospatial index, CAGRA index, or HNSW index. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
Create or delete a foreign key on a particular column.
Manage a range-partitioned or a manual list-partitioned table's partitions.
Set (or reset) the tier strategy of a table or view.
Refresh and manage the refresh mode of a materialized view or an external table.
Set the time-to-live (TTL). This can be applied to tables or views.
Set the global access mode (i.e., locking) for a table. This setting overrides any role-based access controls that may be in place; e.g., a user with write access to a table marked read-only will not be able to insert records into it. The mode can be set to read-only, write-only, read/write, or no access.
- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
alterTable
public AlterTableResponse alterTable(String tableName, String action, String value, Map<String,String> options) throws GPUdbException
Apply various modifications to a table or view. The available modifications include the following:
Manage a table's columns--a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not.
External tables cannot be modified except for their refresh method.
Create or delete a column (attribute) index, low-cardinality index, chunk skip index, geospatial index, CAGRA index, or HNSW index. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
Create or delete a foreign key on a particular column.
Manage a range-partitioned or a manual list-partitioned table's partitions.
Set (or reset) the tier strategy of a table or view.
Refresh and manage the refresh mode of a materialized view or an external table.
Set the time-to-live (TTL). This can be applied to tables or views.
Set the global access mode (i.e., locking) for a table. This setting overrides any role-based access controls that may be in place; e.g., a user with write access to a table marked read-only will not be able to insert records into it. The mode can be set to read-only, write-only, read/write, or no access.
- Parameters:
tableName- Table on which the operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table or view.
action- Modification operation to be applied. Supported values:
CREATE_INDEX: Creates a column (attribute) index, low-cardinality index, chunk skip index, geospatial index, CAGRA index, or HNSW index (depending on the specified INDEX_TYPE), on the column name specified in value. If this column already has the specified index, an error will be returned.
REFRESH_INDEX: Refreshes an index identified by INDEX_TYPE, on the column name specified in value. Currently applicable only to CAGRA indices.
DELETE_INDEX: Deletes a column (attribute) index, low-cardinality index, chunk skip index, geospatial index, CAGRA index, or HNSW index (depending on the specified INDEX_TYPE), on the column name specified in value. If this column does not have the specified index, an error will be returned.
MOVE_TO_COLLECTION: [DEPRECATED--please use MOVE_TO_SCHEMA and use createSchema to create the schema if non-existent] Moves a table or view into a schema named value. If the schema provided is non-existent, it will be automatically created.
MOVE_TO_SCHEMA: Moves a table or view into a schema named value. If the schema provided is nonexistent, an error will be thrown. If value is empty, then the table or view will be placed in the user's default schema.
PROTECTED: No longer used. Previously set whether the given tableName should be protected or not. The value would have been either 'true' or 'false'.
RENAME_TABLE: Renames a table or view to value. Has the same naming restrictions as tables.
TTL: Sets the time-to-live in minutes of the table or view specified in tableName.
ADD_COMMENT: Adds the comment specified in value to the table specified in tableName. Use COLUMN_NAME to set the comment for a column.
ADD_COLUMN: Adds the column specified in value to the table specified in tableName. Use COLUMN_TYPE and COLUMN_PROPERTIES in options to set the column's type and properties, respectively.
CHANGE_COLUMN: Changes the type and properties of the column specified in value. Use COLUMN_TYPE and COLUMN_PROPERTIES in options to set the column's type and properties, respectively. Note that primary key and/or shard key columns cannot be changed. All unchanging column properties must be listed for the change to take place; e.g., to add dictionary encoding to an existing 'char4' column, both 'char4' and 'dict' must be specified in the options map.
DELETE_COLUMN: Deletes the column specified in value from the table specified in tableName.
CREATE_FOREIGN_KEY: Creates a foreign key specified in value using the format '(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]'.
DELETE_FOREIGN_KEY: Deletes a foreign key. The value should be the foreign_key_name specified when creating the key or the complete string used to define it.
ADD_PARTITION: Adds the partition specified in value to either a range-partitioned or manual list-partitioned table.
REMOVE_PARTITION: Removes the partition specified in value (and relocates all of its data to the default partition) from either a range-partitioned or manual list-partitioned table.
DELETE_PARTITION: Deletes the partition specified in value (and all of its data) from either a range-partitioned or manual list-partitioned table.
SET_GLOBAL_ACCESS_MODE: Sets the global access mode (i.e., locking) for the table specified in tableName. Specify the access mode in value. Valid modes are 'no_access', 'read_only', 'write_only' and 'read_write'.
REFRESH: For a materialized view, replays all the table creation commands required to create the view. For an external table, reloads all data in the table from its associated source files or data source.
SET_REFRESH_METHOD: For a materialized view, sets the method by which the view is refreshed to the method specified in value - one of 'manual', 'periodic', or 'on_change'. For an external table, sets the method by which the table is refreshed to the method specified in value - either 'manual' or 'on_start'.
SET_REFRESH_START_TIME: Sets the time to start periodic refreshes of this materialized view to the datetime string specified in value with format 'YYYY-MM-DD HH:MM:SS'. Subsequent refreshes occur at the specified time + N * the refresh period.
SET_REFRESH_STOP_TIME: Sets the time to stop periodic refreshes of this materialized view to the datetime string specified in value with format 'YYYY-MM-DD HH:MM:SS'.
SET_REFRESH_PERIOD: Sets the time interval in seconds at which to refresh this materialized view to the value specified in value. Also sets the refresh method to periodic if not already set.
SET_REFRESH_SPAN: Sets the future time-offset (in seconds) for the view refresh to stop.
SET_REFRESH_EXECUTE_AS: Sets the user name to refresh this materialized view to the value specified in value.
REMOVE_TEXT_SEARCH_ATTRIBUTES: Removes the text search attribute from all columns.
REMOVE_SHARD_KEYS: Removes the shard key property from all columns, so that the table will be considered randomly sharded. The data is not moved. The value is ignored.
SET_STRATEGY_DEFINITION: Sets the tier strategy for the table and its columns to the one specified in value, replacing the existing tier strategy in its entirety.
CANCEL_DATASOURCE_SUBSCRIPTION: Permanently unsubscribe a data source that is loading continuously as a stream. The data source can be Kafka / S3 / Azure.
DROP_DATASOURCE_SUBSCRIPTION: Permanently delete a cancelled data source subscription.
PAUSE_DATASOURCE_SUBSCRIPTION: Temporarily unsubscribe a data source that is loading continuously as a stream. The data source can be Kafka / S3 / Azure.
RESUME_DATASOURCE_SUBSCRIPTION: Resubscribe to a paused data source subscription. The data source can be Kafka / S3 / Azure.
CHANGE_OWNER: Change the owner resource group of the table.
SET_LOAD_VECTORS_POLICY: Set the startup data loading scheme for the table; see the description of 'load_vectors_policy' in createTable for possible values for value.
SET_BUILD_PK_INDEX_POLICY: Set the startup primary key generation scheme for the table; see the description of 'build_pk_index_policy' in createTable for possible values for value.
SET_BUILD_MATERIALIZED_VIEW_POLICY: Set the startup rebuilding scheme for the materialized view; see the description of 'build_materialized_view_policy' in createMaterializedView for possible values for value.
value- The value of the modification, depending on action. For example, if action is ADD_COLUMN, this would be the column name, while the column's definition would be covered by the COLUMN_TYPE, COLUMN_PROPERTIES, COLUMN_DEFAULT_VALUE, and ADD_COLUMN_EXPRESSION in options. If action is TTL, it would be the number of minutes for the new TTL. If action is REFRESH, this field would be blank.
options- Optional parameters.
ACTION
COLUMN_NAME
TABLE_NAME
COLUMN_DEFAULT_VALUE: When adding a column, set a default value for existing records. For nullable columns, the default value will be null, regardless of data type.
COLUMN_PROPERTIES: When adding or changing a column, set the column properties (strings, separated by a comma: data, text_search, char8, int8, etc.).
COLUMN_TYPE: When adding or changing a column, set the column type (strings, separated by a comma: int, double, string, null, etc.).
COPY_VALUES_FROM_COLUMN: [DEPRECATED--please use ADD_COLUMN_EXPRESSION instead.]
RENAME_COLUMN: When changing a column, specify the new column name.
VALIDATE_CHANGE_COLUMN: When changing a column, validate the change before applying it (or not). Supported values:
TRUE: Validate all values. A value too large (or too long) for the new type will prevent any change.
FALSE: When a value is too large or long, it will be truncated.
The default value is TRUE.
UPDATE_LAST_ACCESS_TIME: Indicates whether the time-to-live (TTL) expiration countdown timer should be reset to the table's TTL. Supported values:
TRUE: Reset the expiration countdown timer to the table's configured TTL.
FALSE: Don't reset the timer; the expiration countdown will continue from where it is, as if the table had not been accessed.
The default value is TRUE.
ADD_COLUMN_EXPRESSION: When adding a column, an optional expression to use for the new column's values. Any valid expression may be used, including one containing references to existing columns in the same table.
STRATEGY_DEFINITION: Optional parameter for specifying the tier strategy for the table and its columns when action is SET_STRATEGY_DEFINITION, replacing the existing tier strategy in its entirety.
INDEX_TYPE: Type of index to create, when action is CREATE_INDEX; to refresh, when action is REFRESH_INDEX; or to delete, when action is DELETE_INDEX. Supported values:
COLUMN: Create or delete a column (attribute) index.
LOW_CARDINALITY: Create a low-cardinality column (attribute) index.
CHUNK_SKIP: Create or delete a chunk skip index.
GEOSPATIAL: Create or delete a geospatial index.
CAGRA: Create or delete a CAGRA index on a vector column.
HNSW: Create or delete an HNSW index on a vector column.
The default value is COLUMN.
INDEX_OPTIONS: Options to use when creating an index, in the format "key: value [, key: value [, ...]]". Valid options vary by index type.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
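As a sketch of the ADD_COLUMN action above: the new column's name goes in value, and its definition is carried in options. The table and column names below are hypothetical, and the lowercase option keys correspond to the documented COLUMN_TYPE, COLUMN_PROPERTIES, and COLUMN_DEFAULT_VALUE constants; the live-server call is shown as a comment since it requires a running cluster.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an alterTable ADD_COLUMN call. "example.orders" and "discount"
// are hypothetical names used only for illustration.
public class AddColumnSketch {
    // Build the options map describing the new column's definition.
    public static Map<String, String> addColumnOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("column_type", "int");            // type of the new column
        options.put("column_properties", "nullable"); // comma-separated properties
        options.put("column_default_value", "0");     // default for existing records
        return options;
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> options = addColumnOptions();
        // With a live cluster (URL is an assumption):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // AlterTableResponse resp =
        //     db.alterTable("example.orders", "add_column", "discount", options);
        System.out.println(options);
    }
}
```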
-
alterTableColumns
public AlterTableColumnsResponse alterTableColumns(AlterTableColumnsRequest request) throws GPUdbException
Apply various modifications to columns in a table or view. The available modifications include the following:
Create or delete an index on a particular column. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
Manage a table's columns--a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not.
- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
alterTableColumns
public AlterTableColumnsResponse alterTableColumns(String tableName, List<Map<String,String>> columnAlterations, Map<String,String> options) throws GPUdbException
Apply various modifications to columns in a table or view. The available modifications include the following:
Create or delete an index on a particular column. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
Manage a table's columns--a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not.
- Parameters:
tableName- Table on which the operation will be performed. Must be an existing table or view, in [schema_name.]table_name format, using standard name resolution rules.
columnAlterations- List of alter table add/delete/change column requests, all for the same table. Each request is a map that includes 'column_name', 'action', and the options specific to that action. The same options apply as in individual alter table requests, but they are given in the same map as the column name and the action. For example: [{'column_name':'col_1','action':'change_column','rename_column':'col_2'},{'column_name':'col_1','action':'add_column', 'type':'int','default_value':'1'}]
options- Optional parameters.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
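The columnAlterations list above can be sketched in plain Java, mirroring the example in the parameter description (rename one column, then add another). Table and column names are hypothetical; the live call is commented out since it needs a running cluster.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a columnAlterations list for alterTableColumns.
public class ColumnAlterationsSketch {
    public static List<Map<String, String>> buildAlterations() {
        List<Map<String, String>> alterations = new ArrayList<>();

        // Rename col_1 to col_2; 'rename_column' is the action-specific option.
        Map<String, String> rename = new HashMap<>();
        rename.put("column_name", "col_1");
        rename.put("action", "change_column");
        rename.put("rename_column", "col_2");
        alterations.add(rename);

        // Add a new int column with a default value for existing records.
        Map<String, String> add = new HashMap<>();
        add.put("column_name", "col_3");
        add.put("action", "add_column");
        add.put("type", "int");
        add.put("default_value", "1");
        alterations.add(add);

        return alterations;
    }

    public static void main(String[] args) throws Exception {
        List<Map<String, String>> alterations = buildAlterations();
        // With a live cluster ("example.t" is hypothetical):
        // db.alterTableColumns("example.t", alterations, new HashMap<>());
        System.out.println(alterations.size() + " alterations");
    }
}
```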
-
alterTableMetadata
public AlterTableMetadataResponse alterTableMetadata(AlterTableMetadataRequest request) throws GPUdbException
Updates (adds or changes) metadata for tables. The metadata keys and values must both be strings. This is an easy way to annotate whole tables rather than single records within tables. Examples of metadata include the owner of the table, the table's creation timestamp, etc.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
alterTableMetadata
public AlterTableMetadataResponse alterTableMetadata(List<String> tableNames, Map<String,String> metadataMap, Map<String,String> options) throws GPUdbException
Updates (adds or changes) metadata for tables. The metadata keys and values must both be strings. This is an easy way to annotate whole tables rather than single records within tables. Examples of metadata include the owner of the table, the table's creation timestamp, etc.- Parameters:
tableNames- Names of the tables whose metadata will be updated, in [schema_name.]table_name format, using standard name resolution rules. All specified tables must exist, or an error will be returned.metadataMap- A map which contains the metadata of the tables that are to be updated. Note that only one map is provided for all the tables; so the change will be applied to every table. If the provided map is empty, then all existing metadata for the table(s) will be cleared.options- Optional parameters. The default value is an emptyMap.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
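A minimal sketch of the metadata map described above: the same string-to-string map is applied to every listed table, and an empty map would instead clear all existing metadata. The table names and metadata values are assumptions for illustration.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: annotate two hypothetical tables with the same metadata map.
public class TableMetadataSketch {
    public static Map<String, String> buildMetadata() {
        Map<String, String> metadata = new HashMap<>();
        metadata.put("owner", "analytics_team");         // keys and values must be strings
        metadata.put("created", "2024-01-01T00:00:00Z"); // example annotation
        return metadata;
    }

    public static void main(String[] args) throws Exception {
        List<String> tables = Arrays.asList("example.orders", "example.customers");
        Map<String, String> metadata = buildMetadata();
        // With a live cluster:
        // db.alterTableMetadata(tables, metadata, new HashMap<>());
        System.out.println(tables.size() + " tables, " + metadata.size() + " keys");
    }
}
```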
-
alterTableMonitor
public AlterTableMonitorResponse alterTableMonitor(AlterTableMonitorRequest request) throws GPUdbException
Alters a table monitor previously created withcreateTableMonitor.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
alterTableMonitor
public AlterTableMonitorResponse alterTableMonitor(String topicId, Map<String,String> monitorUpdatesMap, Map<String,String> options) throws GPUdbException
Alters a table monitor previously created withcreateTableMonitor.- Parameters:
topicId- The topic ID returned by createTableMonitor.
monitorUpdatesMap- Map containing the properties of the table monitor to be updated. Error if empty.
SCHEMA_NAME: Updates the schema name. If SCHEMA_NAME doesn't exist, an error will be thrown. If SCHEMA_NAME is empty, then the user's default schema will be used.
options- Optional parameters.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
alterTier
public AlterTierResponse alterTier(AlterTierRequest request) throws GPUdbException
Alters properties of an existing tier to facilitate resource management. To disable watermark-based eviction, set both HIGH_WATERMARK and LOW_WATERMARK to 100.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
alterTier
public AlterTierResponse alterTier(String name, Map<String,String> options) throws GPUdbException
Alters properties of an existing tier to facilitate resource management. To disable watermark-based eviction, set both HIGH_WATERMARK and LOW_WATERMARK to 100.- Parameters:
name- Name of the tier to be altered. Must be an existing tier group name: vram, ram, disk[n], persist, cold[n].
options- Optional parameters.
CAPACITY: Maximum size in bytes this tier may hold at once, per rank.
HIGH_WATERMARK: Threshold of usage of this tier's resource that, once exceeded, will trigger watermark-based eviction from this tier. The minimum allowed value is '0'. The maximum allowed value is '100'.
LOW_WATERMARK: Threshold of resource usage that, once fallen below after crossing the HIGH_WATERMARK, will cease watermark-based eviction from this tier. The minimum allowed value is '0'. The maximum allowed value is '100'.
WAIT_TIMEOUT: Timeout in seconds for reading from or writing to this resource. Applies to cold storage tiers only.
PERSIST: If TRUE, the system configuration will be written to disk upon successful application of this request. This will commit the changes from this request and any additional in-memory modifications. Supported values: The default value is TRUE.
RANK: Apply the requested change only to a specific rank. The minimum allowed value is '0'. The maximum allowed value is '10000'.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
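The watermark options above can be sketched as a small options map; per the description, setting both thresholds to 100 disables watermark-based eviction entirely. The tier name "ram" is from the documented tier group names; the thresholds here are example values.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: build watermark options for an alterTier call.
public class AlterTierSketch {
    public static Map<String, String> watermarkOptions(int high, int low) {
        Map<String, String> options = new HashMap<>();
        options.put("high_watermark", Integer.toString(high)); // start evicting above this %
        options.put("low_watermark", Integer.toString(low));   // stop evicting below this %
        return options;
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> options = watermarkOptions(90, 70);
        // With a live cluster:
        // db.alterTier("ram", options);
        System.out.println(options);
    }
}
```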
-
alterUser
public AlterUserResponse alterUser(AlterUserRequest request) throws GPUdbException
Alters a user.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
alterUser
public AlterUserResponse alterUser(String name, String action, String value, Map<String,String> options) throws GPUdbException
Alters a user.- Parameters:
name- Name of the user to be altered. Must be an existing user.
action- Modification operation to be applied to the user. Supported values:
SET_ACTIVATED: Whether the user is allowed to log in.
TRUE: User may log in
FALSE: User may not log in
SET_COMMENT: Sets the comment for an internal user.
SET_DEFAULT_SCHEMA: Sets the default_schema for an internal user. An empty string means the user will have no default schema.
SET_PASSWORD: Sets the password of the user. The user must be an internal user.
SET_RESOURCE_GROUP: Sets the resource group for an internal user. The resource group must exist; an empty string assigns the user to the default resource group.
value- The value of the modification, depending onaction.options- Optional parameters. The default value is an emptyMap.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
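The name/action/value triple above can be sketched for the SET_RESOURCE_GROUP action; per the description, passing an empty string as the value would assign the default resource group instead. The user and group names are hypothetical.

```java
// Sketch: assemble the arguments for an alterUser SET_RESOURCE_GROUP call.
public class AlterUserSketch {
    // Returns the {name, action, value} triple for alterUser.
    public static String[] buildCall(String user, String group) {
        return new String[] { user, "set_resource_group", group };
    }

    public static void main(String[] args) throws Exception {
        String[] call = buildCall("analyst1", "batch_group");
        // With a live cluster:
        // db.alterUser(call[0], call[1], call[2], new java.util.HashMap<>());
        System.out.println(String.join(" ", call));
    }
}
```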
-
alterVideo
public AlterVideoResponse alterVideo(AlterVideoRequest request) throws GPUdbException
Alters a video.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
alterVideo
public AlterVideoResponse alterVideo(String path, Map<String,String> options) throws GPUdbException
Alters a video.- Parameters:
path- Fully-qualified KiFS path to the video to be altered.options- Optional parameters. The default value is an emptyMap.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
alterWal
public AlterWalResponse alterWal(AlterWalRequest request) throws GPUdbException
Alters table write-ahead log (WAL) settings. Returns information about the requested table WAL modifications.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
alterWal
public AlterWalResponse alterWal(List<String> tableNames, Map<String,String> options) throws GPUdbException
Alters table write-ahead log (WAL) settings. Returns information about the requested table WAL modifications.- Parameters:
tableNames- List of tables to modify. An asterisk changes the system settings.
options- Optional parameters.
MAX_SEGMENT_SIZE: Maximum size of an individual segment file.
SEGMENT_COUNT: Approximate number of segment files to split the WAL across. Must be at least two.
SYNC_POLICY: Policy for how WAL entries are written and synced. Supported values:
NONE: Disables the WAL
BACKGROUND: WAL entries are periodically written instead of immediately after each operation
FLUSH: Protects entries in the event of a database crash
FSYNC: Protects entries in the event of an OS crash
FLUSH_FREQUENCY: Specifies how frequently WAL entries are written with background sync. This is a global setting and can only be used with the system-wide tableNames specifier '*'.
CHECKSUM: If TRUE, each entry will be checked against a protective checksum. Supported values: The default value is TRUE.
OVERRIDE_NON_DEFAULT: If TRUE, tables with unique WAL settings will be overridden when applying a system-level change. Supported values: The default value is FALSE.
RESTORE_SYSTEM_SETTINGS: If TRUE, tables with unique WAL settings will be reverted to the current global settings. Cannot be used in conjunction with any other option. Supported values: The default value is FALSE.
PERSIST: If TRUE and a system-level change was requested, the system configuration will be written to disk upon successful application of this request. This will commit the changes from this request and any additional in-memory modifications. Supported values: The default value is TRUE.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
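A sketch of the WAL options above: switch two tables to background syncing with checksums enabled. The table names are hypothetical, and the lowercase keys correspond to the documented SYNC_POLICY and CHECKSUM constants; passing a list of just "*" would target the system-wide settings instead.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of an alterWal options map.
public class AlterWalSketch {
    public static Map<String, String> walOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("sync_policy", "background"); // periodic writes, not per-operation
        options.put("checksum", "true");          // verify entries with a checksum
        return options;
    }

    public static void main(String[] args) throws Exception {
        List<String> tables = Arrays.asList("example.orders", "example.events");
        // With a live cluster:
        // db.alterWal(tables, walOptions());
        System.out.println(tables.size() + " tables: " + walOptions());
    }
}
```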
-
appendRecords
public AppendRecordsResponse appendRecords(AppendRecordsRequest request) throws GPUdbException
Append (or insert) all records from a source table (specified bysourceTableName) to a particular target table (specified bytableName). The field map (specified byfieldMap) holds the user specified map of target table column names with their mapped source column names.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
appendRecords
public AppendRecordsResponse appendRecords(String tableName, String sourceTableName, Map<String,String> fieldMap, Map<String,String> options) throws GPUdbException
Append (or insert) all records from a source table (specified bysourceTableName) to a particular target table (specified bytableName). The field map (specified byfieldMap) holds the user specified map of target table column names with their mapped source column names.- Parameters:
tableName- The table name for the records to be appended, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
sourceTableName- The source table name to get records from, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table name.
fieldMap- Contains the mapping of column names from the target table (specified by tableName) as the keys, and corresponding column names or expressions (e.g., 'col_name+1') from the source table (specified by sourceTableName). Must be existing column names in the source table and target table, and their types must match. For details on using expressions, see Expressions.
options- Optional parameters.
OFFSET: A positive integer indicating the number of initial results to skip from sourceTableName. Default is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT. The default value is '0'.
LIMIT: A positive integer indicating the maximum number of results to be returned from sourceTableName, or END_OF_SET (-9999) to indicate that the max number of results should be returned. The default value is '-9999'.
EXPRESSION: Optional filter expression to apply to the sourceTableName. The default value is ''.
ORDER_BY: Comma-separated list of the columns to be sorted by from the source table (specified by sourceTableName), e.g., 'timestamp asc, x desc'. The ORDER_BY columns do not have to be present in fieldMap. The default value is ''.
UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting source table records (specified by sourceTableName) into a target table (specified by tableName) with a primary key. If set to TRUE, any existing table record with primary key values that match those of a source table record being inserted will be replaced by that new record (the new data will be "upserted"). If set to FALSE, any existing table record with primary key values that match those of a source table record being inserted will remain unchanged, while the source record will be rejected and an error handled as determined by IGNORE_EXISTING_PK. If the specified table does not have a primary key, then this option has no effect. Supported values:
TRUE: Upsert new records when primary keys match existing records
FALSE: Reject new records when primary keys match existing records
The default value is FALSE.
IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting source table records (specified by sourceTableName) into a target table (specified by tableName) with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any source table record being inserted that is rejected for having primary key values that match those of an existing target table record will be ignored with no error generated. If FALSE, the rejection of any source table record for having primary key values matching an existing target table record will result in an error being raised. If the specified table does not have a primary key or if upsert mode is in effect (UPDATE_ON_EXISTING_PK is TRUE), then this option has no effect. Supported values:
TRUE: Ignore source table records whose primary key values collide with those of target table records
FALSE: Raise an error for any source table record whose primary key values collide with those of a target table record
The default value is FALSE.
PK_CONFLICT_PREDICATE_HIGHER: The record with the higher value for the column resolves the primary-key insert conflict. The default value is ''.
PK_CONFLICT_PREDICATE_LOWER: The record with the lower value for the column resolves the primary-key insert conflict. The default value is ''.
TRUNCATE_STRINGS: If set to TRUE, it allows inserting longer strings into smaller charN string columns by truncating the longer strings to fit. Supported values: The default value is FALSE.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
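An appendRecords call can be sketched as two maps: a fieldMap of target columns to source columns or expressions, and an options map. The table and column names below are hypothetical; the lowercase option keys correspond to the documented EXPRESSION and UPDATE_ON_EXISTING_PK constants.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: copy filtered rows from a hypothetical staging table into a
// target table, upserting on primary-key collisions.
public class AppendRecordsSketch {
    public static Map<String, String> fieldMap() {
        Map<String, String> map = new HashMap<>();
        map.put("id", "id");                // target column -> source column
        map.put("total", "price*quantity"); // source side may be an expression
        return map;
    }

    public static Map<String, String> appendOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("expression", "price > 0");       // filter on the source table
        options.put("update_on_existing_pk", "true"); // upsert on PK collisions
        return options;
    }

    public static void main(String[] args) throws Exception {
        // With a live cluster:
        // db.appendRecords("example.orders", "example.orders_staging",
        //                  fieldMap(), appendOptions());
        System.out.println(fieldMap().size() + " mapped columns");
    }
}
```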
-
clearStatistics
public ClearStatisticsResponse clearStatistics(ClearStatisticsRequest request) throws GPUdbException
Clears statistics (cardinality, mean value, etc.) for a column in a specified table.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
clearStatistics
public ClearStatisticsResponse clearStatistics(String tableName, String columnName, Map<String,String> options) throws GPUdbException
Clears statistics (cardinality, mean value, etc.) for a column in a specified table.- Parameters:
tableName- Name of a table, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table. The default value is ''.
columnName- Name of the column in tableName for which to clear statistics. The column must be from an existing table. An empty string clears statistics for all columns in the table. The default value is ''.
options- Optional parameters. The default value is an empty Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
clearTable
public ClearTableResponse clearTable(ClearTableRequest request) throws GPUdbException
Clears (drops) one or all tables in the database cluster. The operation is synchronous, meaning that the table will be cleared before the function returns. The response payload returns the status of the operation along with the name of the table that was cleared.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
clearTable
public ClearTableResponse clearTable(String tableName, String authorization, Map<String,String> options) throws GPUdbException
Clears (drops) one or all tables in the database cluster. The operation is synchronous, meaning that the table will be cleared before the function returns. The response payload returns the status of the operation along with the name of the table that was cleared.- Parameters:
tableName- Name of the table to be cleared, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table. An empty string clears all available tables, though this behavior is prevented by default via the gpudb.conf parameter 'disable_clear_all'. The default value is ''.
authorization- No longer used. User can pass an empty string. The default value is ''.
options- Optional parameters.
NO_ERROR_IF_NOT_EXISTS: If TRUE and the table specified in tableName does not exist, no error is returned. If FALSE and the table specified in tableName does not exist, then an error is returned. Supported values: The default value is FALSE.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
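A minimal sketch of the String-based clearTable overload. The server URL and table name are hypothetical placeholders, and the lowercase option key is assumed to be the string form of the NO_ERROR_IF_NOT_EXISTS option documented above; the live call is shown in comments since it requires a reachable cluster:

```java
import java.util.HashMap;
import java.util.Map;

public class ClearTableExample {
    // Builds options that make the drop tolerant of a missing table.
    public static Map<String, String> dropOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("no_error_if_not_exists", "true");
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> options = dropOptions();
        // Against a live cluster (hypothetical URL and table):
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // gpudb.clearTable("example.my_table", "", options); // authorization is unused; pass ""
        System.out.println(options);
    }
}
```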
-
clearTableMonitor
public ClearTableMonitorResponse clearTableMonitor(ClearTableMonitorRequest request) throws GPUdbException
Deactivates a table monitor previously created with createTableMonitor.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
clearTableMonitor
public ClearTableMonitorResponse clearTableMonitor(String topicId, Map<String,String> options) throws GPUdbException
Deactivates a table monitor previously created with createTableMonitor.- Parameters:
topicId - The topic ID returned by createTableMonitor. options - Optional parameters. KEEP_AUTOGENERATED_SINK: If TRUE, the auto-generated datasink associated with this monitor, if there is one, will be retained for further use. If FALSE, the auto-generated sink will be dropped if there are no other monitors referencing it. Supported values: The default value is FALSE. CLEAR_ALL_REFERENCES: If TRUE, all references that share the same topicId will be cleared. Supported values: The default value is FALSE.
The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
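A sketch of the options for the String-based clearTableMonitor overload; the lowercase keys are assumed to be the string forms of the option constants documented above, and the topic ID would come from a prior createTableMonitor call:

```java
import java.util.HashMap;
import java.util.Map;

public class ClearTableMonitorExample {
    // Options that keep the auto-generated sink while clearing every
    // monitor sharing the same topic ID.
    public static Map<String, String> clearOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("keep_autogenerated_sink", "true");
        options.put("clear_all_references", "true");
        return options;
    }

    public static void main(String[] args) {
        // topicId would be the value returned by createTableMonitor:
        // gpudb.clearTableMonitor(topicId, clearOptions());
        System.out.println(clearOptions());
    }
}
```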
-
clearTables
public ClearTablesResponse clearTables(ClearTablesRequest request) throws GPUdbException
Clears (drops) tables in the database cluster. The operation is synchronous, meaning that the tables will be cleared before the function returns. The response payload returns the status of the operation for each table requested.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
clearTables
public ClearTablesResponse clearTables(List<String> tableNames, Map<String,String> options) throws GPUdbException
Clears (drops) tables in the database cluster. The operation is synchronous, meaning that the tables will be cleared before the function returns. The response payload returns the status of the operation for each table requested.- Parameters:
tableNames - Names of the tables to be cleared, in [schema_name.]table_name format, using standard name resolution rules. Must be existing tables. An empty list clears all available tables, though this behavior is prevented by default via the gpudb.conf parameter 'disable_clear_all'. The default value is an empty List. options - Optional parameters. NO_ERROR_IF_NOT_EXISTS: If TRUE and a table specified in tableNames does not exist, no error is returned. If FALSE and a table specified in tableNames does not exist, an error is returned. Supported values: The default value is FALSE.
The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
clearTrigger
public ClearTriggerResponse clearTrigger(ClearTriggerRequest request) throws GPUdbException
Clears or cancels the trigger identified by the specified handle. The output returns the handle of the trigger cleared as well as an indication of the success or failure of the trigger deactivation.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
clearTrigger
public ClearTriggerResponse clearTrigger(String triggerId, Map<String,String> options) throws GPUdbException
Clears or cancels the trigger identified by the specified handle. The output returns the handle of the trigger cleared as well as an indication of the success or failure of the trigger deactivation.- Parameters:
triggerId - ID of the trigger to be deactivated. options - Optional parameters. The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
collectStatistics
public CollectStatisticsResponse collectStatistics(CollectStatisticsRequest request) throws GPUdbException
Collects statistics for one or more columns in a specified table.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
collectStatistics
public CollectStatisticsResponse collectStatistics(String tableName, List<String> columnNames, Map<String,String> options) throws GPUdbException
Collects statistics for one or more columns in a specified table.- Parameters:
tableName - Name of a table, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table. columnNames - List of one or more column names in tableName for which to collect statistics (cardinality, mean value, etc.). options - Optional parameters. The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
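A sketch of the String-based collectStatistics overload; the table and column names are hypothetical, and this endpoint takes no documented option keys here, so an empty map suffices:

```java
import java.util.Arrays;
import java.util.List;

public class CollectStatisticsExample {
    // Columns (hypothetical names) for which to collect statistics
    // such as cardinality and mean value.
    public static List<String> statColumns() {
        return Arrays.asList("price", "quantity");
    }

    public static void main(String[] args) {
        // Against a live cluster (hypothetical table name):
        // gpudb.collectStatistics("example.orders", statColumns(),
        //                         java.util.Collections.emptyMap());
        System.out.println(statColumns());
    }
}
```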
-
createBackup
public CreateBackupResponse createBackup(CreateBackupRequest request) throws GPUdbException
Creates a database backup, containing a snapshot of existing objects, at the remote file store accessible via the data sink specified by datasinkName.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createBackup
public CreateBackupResponse createBackup(String backupName, String backupType, Map<String,String> backupObjectsMap, String datasinkName, Map<String,String> options) throws GPUdbException
Creates a database backup, containing a snapshot of existing objects, at the remote file store accessible via the data sink specified by datasinkName.- Parameters:
backupName - Name for this backup. If the backup already exists, only an incremental or differential backup can be made, unless RECREATE is set to TRUE. backupType - Type of snapshot to create. Supported values: INCREMENTAL: Snapshot of changes in the database objects and data since the last snapshot of any kind. DIFFERENTIAL: Snapshot of changes in the database objects and data since the last full snapshot. FULL: Snapshot of the given database objects and data.
backupObjectsMap - Map of objects to be captured in the backup; must be specified when creating a full snapshot and left unspecified when creating an incremental or differential snapshot. ALL: All object types and data contained in the given schema(s). TABLE: Table(s) and SQL view(s). CATALOG: Catalog(s). CREDENTIAL: Credential(s). CONTEXT: Context(s). DATASINK: Data sink(s). DATASOURCE: Data source(s). STORED_PROCEDURE: SQL procedure(s). MONITOR: Table monitor(s) / SQL stream(s). USER: User(s) (internal and external) and associated permissions. ROLE: Role(s), role members (roles or users, recursively), and associated permissions. CONFIGURATION: If TRUE, back up the database configuration file. Supported values: The default value is FALSE.
datasinkName - Data sink through which the backup will be stored. options - Optional parameters. COMMENT: Comments to store with this backup. CHECKSUM: Whether or not to calculate checksums for backup files. Supported values: The default value is FALSE. DDL_ONLY: Whether or not, for tables, to back up only DDL and not table data. Supported values: The default value is FALSE. MAX_INCREMENTAL_BACKUPS_TO_KEEP: Maximum number of incremental snapshots to keep. The default value is '-1'. DELETE_INTERMEDIATE_BACKUPS: Whether or not to delete any intermediate snapshots when backupType is set to DIFFERENTIAL. Supported values: The default value is FALSE. RECREATE: Whether or not to replace an existing backup object with a new backup with a full snapshot, if one already exists. Supported values: The default value is FALSE. DRY_RUN: Whether or not to perform a dry run of a backup operation. Supported values: The default value is FALSE.
The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
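A sketch of options for a verifiable trial run of a full backup. The lowercase keys are assumed to be the string forms of the option constants documented above; the backup, schema, and sink names are hypothetical, and the exact value format for backupObjectsMap entries is not shown in this excerpt, so that part stays in comments:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateBackupExample {
    // Options for a checksummed dry run of a backup.
    public static Map<String, String> backupOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("comment", "nightly full backup");
        options.put("checksum", "true");
        options.put("dry_run", "true");
        return options;
    }

    public static void main(String[] args) {
        // A full snapshot requires backupObjectsMap; placeholder shape:
        // Map<String, String> objects = new HashMap<>();
        // objects.put("all", "example_schema");
        // gpudb.createBackup("nightly", "full", objects, "backup_sink",
        //                    backupOptions());
        System.out.println(backupOptions());
    }
}
```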
-
createCatalog
public CreateCatalogResponse createCatalog(CreateCatalogRequest request) throws GPUdbException
Creates a catalog, which contains the location and connection information for a deltalake catalog that is external to the database.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createCatalog
public CreateCatalogResponse createCatalog(String name, String tableFormat, String location, String type, String credential, String datasource, Map<String,String> options) throws GPUdbException
Creates a catalog, which contains the location and connection information for a deltalake catalog that is external to the database.- Parameters:
name - Name of the catalog to be created. tableFormat - Table format (iceberg, hudi, deltalake). location - Location of the catalog in 'http[s]://[server[:port]]' format. type - Type of the catalog (REST (unity, polaris, tabular), nessie, hive, glue). credential - Name of the credential object to be used in the catalog. datasource - Name of the data source object to be used in the catalog. options - Optional parameters. ACCESS_DELEGATION: Use access delegation for the object store. Supported values: The default value is DATASOURCE_CREDENTIALS. SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: The default value is FALSE.
The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createContainerRegistry
public CreateContainerRegistryResponse createContainerRegistry(CreateContainerRegistryRequest request) throws GPUdbException
- Throws:
GPUdbException
-
createContainerRegistry
public CreateContainerRegistryResponse createContainerRegistry(String registryName, String uri, String credential, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
createCredential
public CreateCredentialResponse createCredential(CreateCredentialRequest request) throws GPUdbException
Creates a new credential.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createCredential
public CreateCredentialResponse createCredential(String credentialName, String type, String identity, String secret, Map<String,String> options) throws GPUdbException
Creates a new credential.- Parameters:
credentialName - Name of the credential to be created. Must contain only letters, digits, and underscores, and cannot begin with a digit. Must not match an existing credential name. type - Type of the credential to be created. Supported values: identity - User of the credential to be created. secret - Password of the credential to be created. options - Optional parameters. The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
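A small sketch validating the credential-name rule stated above (letters, digits, and underscores only, not beginning with a digit). The list of supported credential types is elided in this excerpt, so the type argument stays a placeholder in the commented call:

```java
public class CreateCredentialExample {
    // Checks the documented naming rule for credential names.
    public static boolean isValidCredentialName(String name) {
        return name.matches("[A-Za-z_][A-Za-z0-9_]*");
    }

    public static void main(String[] args) {
        // "<credential-type>" stands in for one of the supported types,
        // whose list is elided in this excerpt:
        // gpudb.createCredential("s3_cred", "<credential-type>",
        //                        "identity", "secret",
        //                        java.util.Collections.emptyMap());
        System.out.println(isValidCredentialName("s3_cred"));
    }
}
```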
-
createDatasink
public CreateDatasinkResponse createDatasink(CreateDatasinkRequest request) throws GPUdbException
Creates a data sink, which contains the destination information for a data sink that is external to the database.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createDatasink
public CreateDatasinkResponse createDatasink(String name, String destination, Map<String,String> options) throws GPUdbException
Creates a data sink, which contains the destination information for a data sink that is external to the database.- Parameters:
name - Name of the data sink to be created. destination - Destination for the output data in 'storage_provider_type://path[:port]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka', and 's3'. options - Optional parameters. CONNECTION_TIMEOUT: Timeout in seconds for connecting to this data sink. WAIT_TIMEOUT: Timeout in seconds for waiting for a response from this data sink. CREDENTIAL: Name of the credential object to be used in this data sink. S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data sink. S3_REGION: Name of the Amazon S3 region where the given bucket is located. S3_VERIFY_SSL: Whether to verify SSL connections. Supported values: TRUE: Connect with SSL verification. FALSE: Connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.
The default value is TRUE. S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 sink. Supported values: TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL. FALSE: Use path-style URIs for requests.
The default value is TRUE. S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user. S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data. S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data. S3_ENCRYPTION_TYPE: Server-side encryption type. S3_KMS_KEY_ID: KMS key. HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file. HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user. HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: The default value is FALSE. AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data sink; this is valid only if tenant_id is specified. AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data sink. AZURE_TENANT_ID: Active Directory tenant ID (or directory ID). AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data sink. AZURE_OAUTH_TOKEN: OAuth token to access the given storage container. GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data sink. GCS_PROJECT_ID: Name of the Google Cloud project to use as the data sink. GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data sink. JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class. KAFKA_TOPIC_NAME: Name of the Kafka topic to publish to if destination is a Kafka broker. MAX_BATCH_SIZE: Maximum number of records per notification message. The default value is '1'. MAX_MESSAGE_SIZE: Maximum size in bytes of each notification message. The default value is '1000000'. JSON_FORMAT: The desired format of JSON-encoded notification messages. Supported values: The default value is FLAT. USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, the cloud provider user settings will be used.
Supported values: The default value is FALSE. USE_HTTPS: Use https to connect to the data sink if true, otherwise use http. Supported values: The default value is TRUE. SKIP_VALIDATION: Bypass validation of the connection to this data sink. Supported values: The default value is FALSE.
The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
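A sketch of a Kafka data sink's options; the broker address, topic, and sink names are hypothetical, and the lowercase keys are assumed to be the string forms of the option constants documented above:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateDatasinkExample {
    // Options for a Kafka data sink that batches notification messages.
    public static Map<String, String> kafkaSinkOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("kafka_topic_name", "table-events");
        options.put("max_batch_size", "100");       // records per message
        options.put("max_message_size", "1000000"); // bytes
        return options;
    }

    public static void main(String[] args) {
        // gpudb.createDatasink("event_sink",
        //                      "kafka://broker.example.com:9092",
        //                      kafkaSinkOptions());
        System.out.println(kafkaSinkOptions());
    }
}
```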
-
createDatasource
public CreateDatasourceResponse createDatasource(CreateDatasourceRequest request) throws GPUdbException
Creates a data source, which contains the location and connection information for a data store that is external to the database.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createDatasource
public CreateDatasourceResponse createDatasource(String name, String location, String userName, String password, Map<String,String> options) throws GPUdbException
Creates a data source, which contains the location and connection information for a data store that is external to the database.- Parameters:
name - Name of the data source to be created. location - Location of the remote storage in 'storage_provider_type://[storage_path[:storage_port]]' format. Supported storage provider types are 'azure', 'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'. userName - Name of the remote system user; may be an empty string. password - Password for the remote system user; may be an empty string. options - Optional parameters. SKIP_VALIDATION: Bypass validation of the connection to the remote source. Supported values: The default value is FALSE. CONNECTION_TIMEOUT: Timeout in seconds for connecting to this storage provider. WAIT_TIMEOUT: Timeout in seconds for reading from this storage provider. CREDENTIAL: Name of the credential object to be used in the data source. S3_BUCKET_NAME: Name of the Amazon S3 bucket to use as the data source. S3_REGION: Name of the Amazon S3 region where the given bucket is located. S3_VERIFY_SSL: Whether to verify SSL connections. Supported values: TRUE: Connect with SSL verification. FALSE: Connect without verifying the SSL connection; for testing purposes, bypassing TLS errors, self-signed certificates, etc.
The default value is TRUE. S3_USE_VIRTUAL_ADDRESSING: Whether to use virtual addressing when referencing the Amazon S3 source. Supported values: TRUE: The request URI should be specified in virtual-hosted-style format, where the bucket name is part of the domain name in the URL. FALSE: Use path-style URIs for requests.
The default value is TRUE. S3_AWS_ROLE_ARN: Amazon IAM Role ARN which has the required S3 permissions and can be assumed for the given S3 IAM user. S3_ENCRYPTION_CUSTOMER_ALGORITHM: Customer encryption algorithm used for encrypting data. S3_ENCRYPTION_CUSTOMER_KEY: Customer encryption key to encrypt or decrypt data. HDFS_KERBEROS_KEYTAB: Kerberos keytab file location for the given HDFS user. This may be a KIFS file. HDFS_DELEGATION_TOKEN: Delegation token for the given HDFS user. HDFS_USE_KERBEROS: Use Kerberos authentication for the given HDFS cluster. Supported values: The default value is FALSE. AZURE_STORAGE_ACCOUNT_NAME: Name of the Azure storage account to use as the data source; this is valid only if tenant_id is specified. AZURE_CONTAINER_NAME: Name of the Azure storage container to use as the data source. AZURE_TENANT_ID: Active Directory tenant ID (or directory ID). AZURE_SAS_TOKEN: Shared access signature token for the Azure storage account to use as the data source. AZURE_OAUTH_TOKEN: OAuth token to access the given storage container. GCS_BUCKET_NAME: Name of the Google Cloud Storage bucket to use as the data source. GCS_PROJECT_ID: Name of the Google Cloud project to use as the data source. GCS_SERVICE_ACCOUNT_KEYS: Google Cloud service account keys to use for authenticating the data source. IS_STREAM: To load from Azure/GCS/S3 as a stream continuously. Supported values: The default value is FALSE. KAFKA_TOPIC_NAME: Name of the Kafka topic to use as the data source. JDBC_DRIVER_JAR_PATH: JDBC driver jar file location. This may be a KIFS file. JDBC_DRIVER_CLASS_NAME: Name of the JDBC driver class. ANONYMOUS: Use an anonymous connection to the storage provider--DEPRECATED: this is now the default. Specify use_managed_credentials for a non-anonymous connection. Supported values: The default value is TRUE. USE_MANAGED_CREDENTIALS: When no credentials are supplied, anonymous access is used by default. If this is set, the cloud provider user settings will be used.
Supported values: The default value is FALSE. USE_HTTPS: Use https to connect to the data source if true, otherwise use http. Supported values: The default value is TRUE. SCHEMA_REGISTRY_LOCATION: Location of the Confluent Schema Registry in '[storage_path[:storage_port]]' format. SCHEMA_REGISTRY_CREDENTIAL: Confluent Schema Registry credential object name. SCHEMA_REGISTRY_PORT: Confluent Schema Registry port (optional). SCHEMA_REGISTRY_CONNECTION_RETRIES: Confluent Schema Registry connection retries. SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema Registry connection timeout (in seconds).
The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
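A sketch of an S3 data source using a pre-created credential object; the bucket, region, and credential names are hypothetical, and the lowercase keys are assumed to be the string forms of the option constants documented above:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateDatasourceExample {
    // Options pointing an S3 data source at a specific bucket and
    // region, authenticated via a named credential object.
    public static Map<String, String> s3SourceOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("s3_bucket_name", "example-bucket");
        options.put("s3_region", "us-east-1");
        options.put("credential", "s3_cred");
        return options;
    }

    public static void main(String[] args) {
        // Username/password may be empty when a credential object is used:
        // gpudb.createDatasource("s3_source", "s3://example-bucket",
        //                        "", "", s3SourceOptions());
        System.out.println(s3SourceOptions());
    }
}
```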
-
createDeltaTable
public CreateDeltaTableResponse createDeltaTable(CreateDeltaTableRequest request) throws GPUdbException
- Throws:
GPUdbException
-
createDeltaTable
public CreateDeltaTableResponse createDeltaTable(String deltaTableName, String tableName, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
createDirectory
public CreateDirectoryResponse createDirectory(CreateDirectoryRequest request) throws GPUdbException
Creates a new directory in KiFS. The new directory serves as a location in which the user can upload files using uploadFiles.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createDirectory
public CreateDirectoryResponse createDirectory(String directoryName, Map<String,String> options) throws GPUdbException
Creates a new directory in KiFS. The new directory serves as a location in which the user can upload files using uploadFiles.- Parameters:
directoryName - Name of the directory in KiFS to be created. options - Optional parameters. CREATE_HOME_DIRECTORY: When set, a home directory is created for the user name provided in the value. The directoryName must be an empty string in this case. The user must exist. DATA_LIMIT: The maximum capacity, in bytes, to apply to the created directory. Set to -1 to indicate no upper limit. If empty, the system default limit is applied. NO_ERROR_IF_EXISTS: If TRUE, does not return an error if the directory already exists. Supported values: The default value is FALSE.
The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
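A sketch of createDirectory options that cap the directory's capacity and tolerate re-creation; the directory name is hypothetical, and the lowercase keys are assumed to be the string forms of the option constants documented above:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateDirectoryExample {
    // Options capping the KiFS directory at 1 GiB and suppressing the
    // error if the directory already exists.
    public static Map<String, String> dirOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("data_limit", String.valueOf(1024L * 1024 * 1024));
        options.put("no_error_if_exists", "true");
        return options;
    }

    public static void main(String[] args) {
        // gpudb.createDirectory("staging", dirOptions());
        System.out.println(dirOptions().get("data_limit"));
    }
}
```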
-
createEnvironment
public CreateEnvironmentResponse createEnvironment(CreateEnvironmentRequest request) throws GPUdbException
Creates a new environment which can be used by user-defined functions (UDFs).- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createEnvironment
public CreateEnvironmentResponse createEnvironment(String environmentName, Map<String,String> options) throws GPUdbException
Creates a new environment which can be used by user-defined functions (UDFs).- Parameters:
environmentName - Name of the environment to be created. options - Optional parameters. The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createGraph
public CreateGraphResponse createGraph(CreateGraphRequest request) throws GPUdbException
Creates a new graph network using given nodes, edges, weights, and restrictions. IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.
- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createGraph
public CreateGraphResponse createGraph(String graphName, boolean directedGraph, List<String> nodes, List<String> edges, List<String> weights, List<String> restrictions, Map<String,String> options) throws GPUdbException
Creates a new graph network using given nodes, edges, weights, and restrictions. IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.
- Parameters:
graphName - Name of the graph resource to generate. directedGraph - If set to TRUE, the graph will be directed. If set to FALSE, the graph will not be directed. Consult Directed Graphs for more details. Supported values: true, false.
The default value is true. nodes - Nodes represent fundamental topological units of a graph. Nodes must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS NODE_ID', expressions, e.g., 'ST_MAKEPOINT(column1, column2) AS NODE_WKTPOINT', or constant values, e.g., '{9, 10, 11} AS NODE_ID'. If using constant values in an identifier combination, the number of values specified must match across the combination. edges - Edges represent the required fundamental topological unit of a graph that typically connects nodes. Edges must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS EDGE_ID', expressions, e.g., 'SUBSTR(column, 1, 6) AS EDGE_NODE1_NAME', or constant values, e.g., "{'family', 'coworker'} AS EDGE_LABEL". If using constant values in an identifier combination, the number of values specified must match across the combination. weights - Weights represent a method of informing the graph solver of the cost of including a given edge in a solution. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS WEIGHTS_EDGE_ID', expressions, e.g., 'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED', or constant values, e.g., '{4, 15} AS WEIGHTS_VALUESPECIFIED'. If using constant values in an identifier combination, the number of values specified must match across the combination. restrictions - Restrictions represent a method of informing the graph solver which edges and/or nodes should be ignored for the solution. Restrictions must be specified using identifiers; identifiers are grouped as combinations.
Identifiers can be used with existing column names, e.g., 'table.column AS RESTRICTIONS_EDGE_ID', expressions, e.g., 'column/2 AS RESTRICTIONS_VALUECOMPARED', or constant values, e.g., '{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'. If using constant values in an identifier combination, the number of values specified must match across the combination. options - Optional parameters. MERGE_TOLERANCE: If node geospatial positions are input (e.g., WKTPOINT, X, Y), determines the minimum separation allowed between unique nodes. If nodes are within the tolerance of each other, they will be merged as a single node. The default value is '1.0E-5'. RECREATE: If set to TRUE and the graph (using graphName) already exists, the graph is deleted and recreated. Supported values: The default value is FALSE. SAVE_PERSIST: If set to TRUE, the graph will be saved in the persist directory (see the config reference for more information). If set to FALSE, the graph will be removed when the graph server is shut down. Supported values: The default value is FALSE. ADD_TABLE_MONITOR: Adds a table monitor to every table used in the creation of the graph; this table monitor will trigger the graph to update dynamically upon inserts to the source table(s). Note that upon database restart, if SAVE_PERSIST is also set to TRUE, the graph will be fully reconstructed and the table monitors will be reattached. For more details on table monitors, see createTableMonitor. Supported values: The default value is FALSE. GRAPH_TABLE: If specified, the created graph is also created as a table with the given name, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. The table will have the following identifier columns: 'EDGE_ID', 'EDGE_NODE1_ID', 'EDGE_NODE2_ID'. If left blank, no table is created.
The default value is ''. ADD_TURNS: Adds dummy 'pillowed' edges around intersection nodes where there are more than three edges so that additional weight penalties can be imposed by the solve endpoints (this increases the total number of edges). Supported values: The default value is FALSE. IS_PARTITIONED: Supported values: The default value is FALSE. SERVER_ID: Indicates which graph server(s) to send the request to. The default is to send to the server with the most available memory. USE_RTREE: Use a range tree structure to accelerate and improve the accuracy of snapping, especially to edges. Supported values: The default value is TRUE. LABEL_DELIMITER: If provided, the label string will be split according to this delimiter and each sub-string will be applied as a separate label onto the specified edge. The default value is ''. ALLOW_MULTIPLE_EDGES: Multigraph choice; allows multiple edges with the same node pairs if set to true; otherwise, new edges with the same existing node pairs will not be inserted. Supported values: The default value is TRUE. EMBEDDING_TABLE: If the table exists (it should be generated by the match/graph match_embedding solver), the vector embeddings for the newly inserted nodes will be appended into this table. The default value is ''.
The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
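A sketch of identifier combinations and options for createGraph, built from the example forms in the parameter descriptions above; the table, column, and graph names are hypothetical, and the lowercase option keys are assumed to be the string forms of the documented constants:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CreateGraphExample {
    // Edge identifier combination following the 'table.column AS EDGE_ID'
    // pattern from the parameter descriptions (hypothetical names).
    public static List<String> edges() {
        return Arrays.asList(
            "road_network.edge_id AS EDGE_ID",
            "road_network.node1 AS EDGE_NODE1_ID",
            "road_network.node2 AS EDGE_NODE2_ID");
    }

    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        options.put("recreate", "true");     // replace the graph if present
        options.put("save_persist", "true"); // survive server restarts
        // gpudb.createGraph("road_graph", true, Arrays.asList(), edges(),
        //     Arrays.asList("road_network.length AS WEIGHTS_VALUESPECIFIED"),
        //     Arrays.asList(), options);
        System.out.println(edges().size());
    }
}
```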
-
createJob
public CreateJobResponse createJob(CreateJobRequest request) throws GPUdbException
Creates a job that will run asynchronously. The response returns a job ID, which can be used to query the status and result of the job. The status and the result of the job upon completion can be requested by getJob.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createJob
public CreateJobResponse createJob(String endpoint, String requestEncoding, ByteBuffer data, String dataStr, Map<String,String> options) throws GPUdbException
Creates a job that will run asynchronously. The response returns a job ID, which can be used to query the status and result of the job. The status and the result of the job upon completion can be requested by getJob.- Parameters:
endpoint - Indicates which endpoint to execute, e.g. '/alter/table'. requestEncoding - The encoding of the request payload for the job. Supported values: The default value is BINARY. data - Binary-encoded payload for the job to be run asynchronously. The payload must contain the relevant input parameters for the endpoint indicated in endpoint. Please see the documentation for the appropriate endpoint to see what values must (or can) be specified. If this parameter is used, then requestEncoding must be BINARY or SNAPPY. dataStr - JSON-encoded payload for the job to be run asynchronously. The payload must contain the relevant input parameters for the endpoint indicated in endpoint. Please see the documentation for the appropriate endpoint to see what values must (or can) be specified. If this parameter is used, then requestEncoding must be JSON. options - Optional parameters. JOB_TAG: Tag to use for the submitted job. The same tag can be used on a backup cluster to retrieve the response for the job. Tags can use letters, numbers, '_' and '-'.
The default value is an empty Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
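A sketch of tagging an asynchronous job; the tag value and target endpoint are illustrative, and the lowercase key is assumed to be the string form of the JOB_TAG option documented above:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateJobExample {
    // Options tagging the job so its response can also be retrieved
    // from a backup cluster (hypothetical tag value).
    public static Map<String, String> jobOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("job_tag", "nightly-alter-01");
        return options;
    }

    public static void main(String[] args) {
        // A JSON payload requires requestEncoding "json"; the endpoint
        // and payload below are illustrative only:
        // CreateJobResponse resp = gpudb.createJob("/show/system/status",
        //     "json", null, "{}", jobOptions());
        // ...then poll with gpudb.getJob(...) using resp's job ID.
        System.out.println(jobOptions().get("job_tag"));
    }
}
```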
-
createJoinTable
public CreateJoinTableResponse createJoinTable(CreateJoinTableRequest request) throws GPUdbException
Creates a table that is the result of a SQL JOIN. For join details and examples see: Joins. For limitations, see Join Limitations and Cautions.
- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
createJoinTable
public CreateJoinTableResponse createJoinTable(String joinTableName, List<String> tableNames, List<String> columnNames, List<String> expressions, Map<String,String> options) throws GPUdbException
Creates a table that is the result of a SQL JOIN. For join details and examples, see: Joins. For limitations, see: Join Limitations and Cautions.
- Parameters:
joinTableName - Name of the join table to be created, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.
tableNames - The list of table names composing the join, each in [schema_name.]table_name format, using standard name resolution rules. Corresponds to a SQL statement FROM clause.
columnNames - List of member table columns or column expressions to be included in the join. Columns can be prefixed with 'table_id.column_name', where 'table_id' is the table name or alias. Columns can be aliased via the syntax 'column_name as alias'. Wild cards '*' can be used to include all columns across member tables or 'table_id.*' for all of a single table's columns. Columns and column expressions composing the join must be uniquely named or aliased; therefore, the '*' wild card cannot be used if column names aren't unique across all tables.
expressions - An optional list of expressions to combine and filter the joined tables. Corresponds to a SQL statement WHERE clause. For details, see: expressions. The default value is an empty List.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of joinTableName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_JOIN_TABLE_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the join as part of joinTableName and use createSchema to create the schema if non-existent] Name of a schema for the join. If the schema is non-existent, it will be automatically created. The default value is ''.
MAX_QUERY_DIMENSIONS: No longer used.
STRATEGY_DEFINITION: The tier strategy for the table and its columns.
TTL: Sets the TTL of the join table specified in joinTableName.
VIEW_ID: ID of the view this projection is part of. The default value is ''.
NO_COUNT: Return a count of 0 for the join table for logging and for showTable; optimization needed for large overlapped equi-join stencils. The default value is 'false'.
CHUNK_SIZE: Maximum number of records per joined-chunk for this table. Defaults to the gpudb.conf file chunk size.
ENABLE_VIRTUAL_CHUNKING: Collect chunks with accumulated size less than chunk_size into a single chunk. The default value is 'false'.
MAX_VIRTUAL_CHUNK_SIZE: Maximum number of records per virtual-chunk. When set, enables virtual chunking. Defaults to chunk_size if virtual chunking is otherwise enabled.
MIN_VIRTUAL_CHUNK_SIZE: Minimum number of records per virtual-chunk. When set, enables virtual chunking. Defaults to chunk_size if virtual chunking is otherwise enabled.
ENABLE_SPARSE_VIRTUAL_CHUNKING: Materialize virtual chunks with only non-deleted values. The default value is 'false'.
ENABLE_EQUI_JOIN_LAZY_RESULT_STORE: Allow using the lazy result store to cache the computation of one side of a multichunk equi-join. Reduces computation but also reduces parallelism to the number of chunks on the other side of the equi-join.
ENABLE_PREDICATE_EQUI_JOIN_LAZY_RESULT_STORE: Allow using the lazy result store to cache the computation of one side of a multichunk predicate-equi-join. Reduces computation but also reduces parallelism to the number of chunks on the other side of the equi-join.
ENABLE_PK_EQUI_JOIN: Use equi-join to do primary key joins rather than using the primary-key-index.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
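A sketch of assembling the arguments for the flat createJoinTable overload. The table names, aliases, and join table name are hypothetical; the client calls are commented out because they require a live Kinetica cluster and the com.gpudb jar. Note the aliased column, since join columns must be uniquely named or aliased.

```java
import java.util.*;

public class JoinTableExample {
    // Build the column list for a two-table join; every column must be
    // uniquely named or aliased, so the customer name is aliased here.
    static List<String> joinColumns() {
        return Arrays.asList("o.order_id", "o.total", "c.name as customer_name");
    }

    public static void main(String[] args) {
        List<String> tableNames  = Arrays.asList("store.orders as o", "store.customers as c");
        List<String> expressions = Arrays.asList("o.customer_id = c.id"); // WHERE-style join condition
        Map<String, String> options = new HashMap<>();
        options.put("create_temp_table", "true");   // let the server pick a sys_temp name
        System.out.println(String.join(", ", joinColumns()));
        // With a live cluster (hypothetical URL), the call would be:
        // GPUdb db = new GPUdb("http://localhost:9191");
        // CreateJoinTableResponse resp = db.createJoinTable(
        //         "store.order_details", tableNames, joinColumns(), expressions, options);
    }
}
```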
-
createMaterializedView
public CreateMaterializedViewResponse createMaterializedView(CreateMaterializedViewRequest request) throws GPUdbException
Initiates the process of creating a materialized view, reserving the view's name to prevent other views or tables from being created with that name. For materialized view details and examples, see Materialized Views.
The response contains viewId, which is used to tag each subsequent operation (projection, union, aggregation, filter, or join) that will compose the view.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
createMaterializedView
public CreateMaterializedViewResponse createMaterializedView(String tableName, Map<String,String> options) throws GPUdbException
Initiates the process of creating a materialized view, reserving the view's name to prevent other views or tables from being created with that name. For materialized view details and examples, see Materialized Views.
The response contains viewId, which is used to tag each subsequent operation (projection, union, aggregation, filter, or join) that will compose the view.
- Parameters:
tableName - Name of the table to be created that is the top-level table of the materialized view, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.
options - Optional parameters.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the materialized view as part of tableName and use createSchema to create the schema if non-existent] Name of a schema which is to contain the newly created view. If the schema provided is non-existent, it will be automatically created.
EXECUTE_AS: User name to use to run the refresh job.
BUILD_MATERIALIZED_VIEW_POLICY: Sets the startup materialized view rebuild scheme. Supported values: ALWAYS (rebuild as many materialized views as possible before accepting requests), LAZY (rebuild the necessary materialized views at start, and load the remainder lazily), ON_DEMAND (rebuild materialized views as requests use them), SYSTEM (rebuild materialized views using the system-configured default). The default value is SYSTEM.
PERSIST: If TRUE, then the materialized view specified in tableName will be persisted and will not expire unless a TTL is specified. If FALSE, then the materialized view will be an in-memory table and will expire unless a TTL is specified otherwise. Supported values: TRUE, FALSE. The default value is FALSE.
REFRESH_SPAN: Sets the future time-offset (in seconds) at which periodic refresh stops.
REFRESH_STOP_TIME: When REFRESH_METHOD is PERIODIC, specifies the time at which a periodic refresh is stopped. Value is a datetime string with format 'YYYY-MM-DD HH:MM:SS'.
REFRESH_METHOD: Method by which the view can be refreshed when the data in the underlying member tables has changed. Supported values: MANUAL (refresh only occurs when manually requested by calling alterTable with an 'action' of 'refresh'), ON_QUERY (refresh any time the view is queried), ON_CHANGE (if possible, incrementally refresh (refresh just those records added) whenever an insert, update, delete or refresh of an input table is done; a full refresh is done if an incremental refresh is not possible), PERIODIC (refresh the table periodically at the rate specified by REFRESH_PERIOD). The default value is MANUAL.
REFRESH_PERIOD: When REFRESH_METHOD is PERIODIC, specifies the period in seconds at which refresh occurs.
REFRESH_START_TIME: When REFRESH_METHOD is PERIODIC, specifies the first time at which a refresh is to be done. Value is a datetime string with format 'YYYY-MM-DD HH:MM:SS'.
TTL: Sets the TTL of the table specified in tableName.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
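A sketch of the option map for a persisted, periodically refreshed materialized view, using the lowercase wire forms of the option keys above. The view name, URL, and refresh schedule are hypothetical, and the client calls are commented out since they need a live cluster and the com.gpudb jar.

```java
import java.util.HashMap;
import java.util.Map;

public class MaterializedViewExample {
    // Options for a persisted materialized view refreshed every hour.
    static Map<String, String> mvOptions() {
        Map<String, String> opts = new HashMap<>();
        opts.put("persist", "true");
        opts.put("refresh_method", "periodic");
        opts.put("refresh_period", "3600");                 // seconds between refreshes
        opts.put("refresh_start_time", "2024-01-01 00:00:00");
        return opts;
    }

    public static void main(String[] args) {
        System.out.println(mvOptions());
        // With a live cluster (hypothetical URL and view name):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // CreateMaterializedViewResponse resp =
        //         db.createMaterializedView("demo.daily_totals", mvOptions());
        // String viewId = resp.getViewId(); // tag subsequent composing operations with this
    }
}
```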
-
createProc
public CreateProcResponse createProc(CreateProcRequest request) throws GPUdbException
Creates an instance (proc) of the user-defined functions (UDF) specified by the given command, options, and files, and makes it available for execution.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
createProc
public CreateProcResponse createProc(String procName, String executionMode, Map<String,ByteBuffer> files, String command, List<String> args, Map<String,String> options) throws GPUdbException
Creates an instance (proc) of the user-defined functions (UDF) specified by the given command, options, and files, and makes it available for execution.
- Parameters:
procName - Name of the proc to be created. Must not be the name of a currently existing proc.
executionMode - The execution mode of the proc. Supported values: DISTRIBUTED (input table data will be divided into data segments that are distributed across all nodes in the cluster, and the proc command will be invoked once per data segment in parallel; output table data from each invocation will be saved to the same node as the corresponding input data), NONDISTRIBUTED (the proc command will be invoked only once per execution, and will not have direct access to any tables named as input or output table parameters in the call to executeProc; it will, however, be able to access the database using native API calls). The default value is DISTRIBUTED.
files - A map of the files that make up the proc. The keys of the map are file names, and the values are the binary contents of the files. The file names may include subdirectory names (e.g. 'subdir/file') but must not resolve to a directory above the root for the proc. Files may be loaded from existing files in KiFS; those file names should be prefixed with the URI kifs:// and the values in the map should be empty. The default value is an empty Map.
command - The command (excluding arguments) that will be invoked when the proc is executed. It will be invoked from the directory containing the proc files and may be any command that can be resolved from that directory. It need not refer to a file actually in that directory; for example, it could be 'java' if the proc is a Java application; however, any necessary external programs must be preinstalled on every database node. If the command refers to a file in that directory, it must be preceded with './' as per Linux convention. If not specified, and exactly one file is provided in files, that file will be invoked. The default value is ''.
args - An array of command-line arguments that will be passed to command when the proc is executed. The default value is an empty List.
options - Optional parameters.
MAX_CONCURRENCY_PER_NODE: The maximum number of concurrent instances of the proc that will be executed per node. 0 allows unlimited concurrency. The default value is '0'.
SET_ENVIRONMENT: A Python environment to use when executing the proc. Must be an existing environment, else an error will be returned. The default value is ''.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
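A sketch of packaging a single-file proc, per the files/command rules above: the file map carries the script's bytes, and with 'python' as the command the script name is passed as an argument. The proc name, script, and URL are hypothetical; the client calls are commented out since they need a live cluster and the com.gpudb jar.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.*;

public class CreateProcExample {
    // Package a single-file Python UDF as the proc's file map; keys are
    // file names, values are the files' binary contents.
    static Map<String, ByteBuffer> procFiles() {
        String script = "print('hello from a distributed proc')\n";
        Map<String, ByteBuffer> files = new HashMap<>();
        files.put("udf.py", ByteBuffer.wrap(script.getBytes(StandardCharsets.UTF_8)));
        return files;
    }

    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        options.put("max_concurrency_per_node", "2");   // cap concurrent instances per node
        System.out.println(procFiles().keySet());
        // With a live cluster (hypothetical URL and proc name):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // CreateProcResponse resp = db.createProc("hello_proc", "distributed",
        //         procFiles(), "python", Arrays.asList("udf.py"), options);
    }
}
```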
-
createProjection
public CreateProjectionResponse createProjection(CreateProjectionRequest request) throws GPUdbException
Creates a new projection of an existing table. A projection represents a subset of the columns (potentially including derived columns) of a table. For projection details and examples, see Projections. For limitations, see Projection Limitations and Cautions.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as getRecordsByColumn.
A projection can be created with a different shard key than the source table. By specifying SHARD_KEY, the projection will be sharded according to the specified columns, regardless of how the source table is sharded. The source table can even be unsharded or replicated.
If tableName is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
createProjection
public CreateProjectionResponse createProjection(String tableName, String projectionName, List<String> columnNames, Map<String,String> options) throws GPUdbException
Creates a new projection of an existing table. A projection represents a subset of the columns (potentially including derived columns) of a table. For projection details and examples, see Projections. For limitations, see Projection Limitations and Cautions.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as getRecordsByColumn.
A projection can be created with a different shard key than the source table. By specifying SHARD_KEY, the projection will be sharded according to the specified columns, regardless of how the source table is sharded. The source table can even be unsharded or replicated.
If tableName is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
- Parameters:
tableName - Name of the existing table on which the projection is to be applied, in [schema_name.]table_name format, using standard name resolution rules. An empty table name creates a projection from a single-row virtual table, where the columns specified should be constants or constant expressions.
projectionName - Name of the projection to be created, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.
columnNames - List of columns from tableName to be included in the projection. Can include derived columns. Can be specified as aliased via the syntax 'column_name as alias'.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of projectionName. If PERSIST is FALSE (or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_PROJECTION_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the projection as part of projectionName and use createSchema to create the schema if non-existent] Name of a schema for the projection. If the schema is non-existent, it will be automatically created. The default value is ''.
EXPRESSION: An optional filter expression to be applied to the source table prior to the projection. The default value is ''.
IS_REPLICATED: If TRUE, then the projection will be replicated even if the source table is not. Supported values: TRUE, FALSE. The default value is FALSE.
OFFSET: The number of initial results to skip (this can be useful for paging through the results). The default value is '0'.
LIMIT: The number of records to keep. The default value is '-9999'.
ORDER_BY: Comma-separated list of the columns to be sorted by; e.g. 'timestamp asc, x desc'. The columns specified must be present in columnNames. If any alias is given for any column name, the alias must be used, rather than the original column name. The default value is ''.
CHUNK_SIZE: Indicates the number of records per chunk to be used for this projection.
CHUNK_COLUMN_MAX_MEMORY: Indicates the target maximum data size for each column in a chunk to be used for this projection.
CHUNK_MAX_MEMORY: Indicates the target maximum data size for all columns in a chunk to be used for this projection.
CREATE_INDEXES: Comma-separated list of columns on which to create indexes on the projection. The columns specified must be present in columnNames. If any alias is given for any column name, the alias must be used, rather than the original column name.
TTL: Sets the TTL of the projection specified in projectionName.
SHARD_KEY: Comma-separated list of the columns to be sharded on; e.g. 'column1, column2'. The columns specified must be present in columnNames. If any alias is given for any column name, the alias must be used, rather than the original column name. The default value is ''.
PERSIST: If TRUE, then the projection specified in projectionName will be persisted and will not expire unless a TTL is specified. If FALSE, then the projection will be an in-memory table and will expire unless a TTL is specified otherwise. Supported values: TRUE, FALSE. The default value is FALSE.
PRESERVE_DICT_ENCODING: If TRUE, then columns that were dict encoded in the source table will be dict encoded in the projection. Supported values: TRUE, FALSE. The default value is TRUE.
RETAIN_PARTITIONS: Determines whether the created projection will retain the partitioning scheme from the source table. Supported values: TRUE, FALSE. The default value is FALSE.
PARTITION_TYPE: Partitioning scheme to use. Supported values: RANGE (use range partitioning), INTERVAL (use interval partitioning), LIST (use list partitioning), HASH (use hash partitioning), SERIES (use series partitioning).
PARTITION_KEYS: Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by PARTITION_DEFINITIONS.
PARTITION_DEFINITIONS: Comma-separated list of partition definitions, whose format depends on the choice of PARTITION_TYPE. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example formats.
IS_AUTOMATIC_PARTITION: If TRUE, a new partition will be created for values which don't fall into an existing partition. Currently only supported for list partitions. Supported values: TRUE, FALSE. The default value is FALSE.
VIEW_ID: ID of the view of which this projection is a member. The default value is ''.
STRATEGY_DEFINITION: The tier strategy for the table and its columns.
COMPRESSION_CODEC: The default compression codec for the projection's columns.
JOIN_WINDOW_FUNCTIONS: If set, window functions which require a reshard will be computed separately and joined back together, if the width of the projection is greater than the join_window_functions_threshold. The default value is 'true'.
JOIN_WINDOW_FUNCTIONS_THRESHOLD: If the projection is greater than this width (in bytes), then window functions which require a reshard will be computed separately and joined back together. The default value is ''.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
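A sketch of a filtered, re-sharded projection with a derived column, using the lowercase wire forms of the option keys above. Table and column names are hypothetical; the client calls are commented out since they need a live cluster and the com.gpudb jar.

```java
import java.util.*;

public class ProjectionExample {
    // Options filtering the source rows, re-sharding on customer_id, and
    // ordering the result; shard_key/order_by columns must appear in columnNames.
    static Map<String, String> projOptions() {
        Map<String, String> opts = new HashMap<>();
        opts.put("expression", "total > 100");   // filter applied before projecting
        opts.put("shard_key", "customer_id");
        opts.put("order_by", "total desc");
        return opts;
    }

    public static void main(String[] args) {
        // Derived column aliased per the 'column_name as alias' syntax.
        List<String> columns = Arrays.asList("customer_id", "total", "total * 0.07 as tax");
        System.out.println(projOptions().get("shard_key"));
        // With a live cluster (hypothetical URL and table names):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createProjection("store.orders", "store.big_orders", columns, projOptions());
    }
}
```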
-
createResourceGroup
public CreateResourceGroupResponse createResourceGroup(CreateResourceGroupRequest request) throws GPUdbException
Creates a new resource group to facilitate resource management.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
createResourceGroup
public CreateResourceGroupResponse createResourceGroup(String name, Map<String,Map<String,String>> tierAttributes, String ranking, String adjoiningResourceGroup, Map<String,String> options) throws GPUdbException
Creates a new resource group to facilitate resource management.
- Parameters:
name - Name of the group to be created. Must contain only letters, digits, and underscores, and cannot begin with a digit. Must not match an existing resource group name.
tierAttributes - Optional map containing tier names and their respective attribute group limits. The only valid attribute limit that can be set is max_memory (in bytes) for the VRAM and RAM tiers. For instance, to set max VRAM capacity to 1GB per rank per GPU and max RAM capacity to 10GB per rank, use: {'VRAM':{'max_memory':'1000000000'}, 'RAM':{'max_memory':'10000000000'}}. MAX_MEMORY: Maximum amount of memory usable at one time, per rank, per GPU, for the VRAM tier; or maximum amount of memory usable at one time, per rank, for the RAM tier. The default value is an empty Map.
ranking - Indicates the relative ranking among existing resource groups where this new resource group will be placed. Supported values: FIRST (make this resource group the new first one in the ordering), LAST (make this resource group the new last one in the ordering), BEFORE (place this resource group before the one specified by adjoiningResourceGroup in the ordering), AFTER (place this resource group after the one specified by adjoiningResourceGroup in the ordering).
adjoiningResourceGroup - If ranking is BEFORE or AFTER, this field indicates the resource group before or after which the current group will be placed; otherwise, leave blank. The default value is ''.
options - Optional parameters.
MAX_CPU_CONCURRENCY: Maximum number of simultaneous threads that will be used to execute a request, per rank, for this group. The minimum allowed value is '4'.
MAX_DATA: Maximum amount of data, per rank, in bytes, that can be used by all database objects within this group. Set to -1 to indicate no upper limit. The minimum allowed value is '-1'.
MAX_SCHEDULING_PRIORITY: Maximum priority of a scheduled task for this group. The minimum allowed value is '1'. The maximum allowed value is '100'.
MAX_TIER_PRIORITY: Maximum priority of a tiered object for this group. The minimum allowed value is '1'. The maximum allowed value is '10'.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
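A sketch of the nested tierAttributes map, mirroring the {'VRAM':..., 'RAM':...} example above (1 GB VRAM per rank per GPU, 10 GB RAM per rank). The group names and URL are hypothetical; the client calls are commented out since they need a live cluster and the com.gpudb jar.

```java
import java.util.*;

public class ResourceGroupExample {
    // Tier attribute limits keyed by tier name; only max_memory (in bytes)
    // may be set, and only for the VRAM and RAM tiers.
    static Map<String, Map<String, String>> tierAttributes() {
        Map<String, Map<String, String>> tiers = new HashMap<>();
        tiers.put("VRAM", Collections.singletonMap("max_memory", "1000000000"));
        tiers.put("RAM",  Collections.singletonMap("max_memory", "10000000000"));
        return tiers;
    }

    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        options.put("max_scheduling_priority", "50");   // must be within 1..100
        System.out.println(tierAttributes().keySet());
        // With a live cluster, placing the new group before a hypothetical
        // existing group named "default":
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createResourceGroup("analysts", tierAttributes(), "before", "default", options);
    }
}
```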
-
createRole
public CreateRoleResponse createRole(CreateRoleRequest request) throws GPUdbException
Creates a new role. Note: This method should be used for on-premise deployments only.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
createRole
public CreateRoleResponse createRole(String name, Map<String,String> options) throws GPUdbException
Creates a new role. Note: This method should be used for on-premise deployments only.
- Parameters:
name - Name of the role to be created. Must contain only lowercase letters, digits, and underscores, and cannot begin with a digit. Must not be the same name as an existing user or role.
options - Optional parameters. RESOURCE_GROUP: Name of an existing resource group to associate with this role. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
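A small sketch illustrating the stated naming constraint and the options map; the resource group name, role name, and URL are hypothetical, and the client calls are commented out since they need a live cluster and the com.gpudb jar. The validity check below is an illustration of the documented rule, not an API call.

```java
import java.util.HashMap;
import java.util.Map;

public class CreateRoleExample {
    // Per the docs above: lowercase letters, digits, and underscores only,
    // and the name must not start with a digit.
    static boolean isValidRoleName(String name) {
        return name.matches("[a-z_][a-z0-9_]*");
    }

    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        options.put("resource_group", "analysts");   // hypothetical existing group
        String role = "report_readers";
        System.out.println(role + " valid? " + isValidRoleName(role));
        // With a live cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createRole(role, options);
    }
}
```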
-
createSchema
public CreateSchemaResponse createSchema(CreateSchemaRequest request) throws GPUdbException
Creates a SQL-style schema. Schemas are containers for tables and views. Multiple tables and views can be defined with the same name in different schemas.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
createSchema
public CreateSchemaResponse createSchema(String schemaName, Map<String,String> options) throws GPUdbException
Creates a SQL-style schema. Schemas are containers for tables and views. Multiple tables and views can be defined with the same name in different schemas.
- Parameters:
schemaName - Name of the schema to be created. Has the same naming restrictions as tables.
options - Optional parameters. NO_ERROR_IF_EXISTS: If TRUE, prevents an error from occurring if the schema already exists. Supported values: TRUE, FALSE. The default value is FALSE. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
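A minimal sketch of idempotent schema creation using the NO_ERROR_IF_EXISTS option (lowercase wire form). The schema name and URL are hypothetical; the client calls are commented out since they need a live cluster and the com.gpudb jar.

```java
import java.util.Collections;
import java.util.Map;

public class CreateSchemaExample {
    // Idempotent schema creation: suppress the error if the schema exists.
    static Map<String, String> schemaOptions() {
        return Collections.singletonMap("no_error_if_exists", "true");
    }

    public static void main(String[] args) {
        System.out.println(schemaOptions());
        // With a live cluster (hypothetical URL and schema name):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createSchema("store", schemaOptions());
    }
}
```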
-
createStateTable
public CreateStateTableResponse createStateTable(CreateStateTableRequest request) throws GPUdbException
- Throws:
GPUdbException
-
createStateTable
public CreateStateTableResponse createStateTable(String tableName, String inputTableName, String initTableName, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
createTable
public CreateTableResponse createTable(CreateTableRequest request) throws GPUdbException
Creates a new table with the given type (definition of columns). The type is specified in typeId as either a numerical type ID (as returned by createType) or as a list of columns, each specified as a list of the column name, data type, and any column attributes.
Example of a type definition with some parameters:
[ ["id", "int8", "primary_key"], ["dept_id", "int8", "primary_key", "shard_key"], ["manager_id", "int8", "nullable"], ["first_name", "char32"], ["last_name", "char64"], ["salary", "decimal"], ["hire_date", "date"] ]
Each column definition consists of the column name (which should meet the standard column naming criteria), the column's specific type (int, long, float, double, string, bytes, or any of the properties map values from createType), and any data handling, data key, or data replacement properties.
A table may optionally be designated to use a replicated distribution scheme, or be assigned: foreign keys to other tables, a partitioning scheme, and/or a tier strategy.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
createTable
public CreateTableResponse createTable(String tableName, String typeId, Map<String,String> options) throws GPUdbException
Creates a new table with the given type (definition of columns). The type is specified in typeId as either a numerical type ID (as returned by createType) or as a list of columns, each specified as a list of the column name, data type, and any column attributes.
Example of a type definition with some parameters:
[ ["id", "int8", "primary_key"], ["dept_id", "int8", "primary_key", "shard_key"], ["manager_id", "int8", "nullable"], ["first_name", "char32"], ["last_name", "char64"], ["salary", "decimal"], ["hire_date", "date"] ]
Each column definition consists of the column name (which should meet the standard column naming criteria), the column's specific type (int, long, float, double, string, bytes, or any of the properties map values from createType), and any data handling, data key, or data replacement properties.
A table may optionally be designated to use a replicated distribution scheme, or be assigned: foreign keys to other tables, a partitioning scheme, and/or a tier strategy.
- Parameters:
tableName - Name of the table to be created, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. The error for requests with an existing table of the same name and type ID may be suppressed by using the NO_ERROR_IF_EXISTS option.
typeId - The type for the table, specified as either an existing table's numerical type ID (as returned by createType) or a type definition (as described above).
options - Optional parameters.
NO_ERROR_IF_EXISTS: If TRUE, prevents an error from occurring if the table already exists and is of the given type. If a table with the same ID but a different type exists, it is still an error. Supported values: TRUE, FALSE. The default value is FALSE.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of tableName. If IS_RESULT_TABLE is TRUE, then this is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_TABLE_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema as part of tableName and use createSchema to create the schema if non-existent] Name of a schema which is to contain the newly created table. If the schema is non-existent, it will be automatically created.
IS_COLLECTION: [DEPRECATED--please use createSchema to create a schema instead] Indicates whether to create a schema instead of a table. Supported values: TRUE, FALSE. The default value is FALSE.
IS_REPLICATED: Affects the distribution scheme for the table's data. If TRUE and the given type has no explicit shard key defined, the table will be replicated. If FALSE, the table will be sharded according to the shard key specified in the given typeId, or randomly sharded, if no shard key is specified. Note that a type containing a shard key cannot be used to create a replicated table. Supported values: TRUE, FALSE. The default value is FALSE.
FOREIGN_KEYS: Semicolon-separated list of foreign keys, of the format '(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]'.
FOREIGN_SHARD_KEY: Foreign shard key of the format 'source_column references shard_by_column from target_table(primary_key_column)'.
PARTITION_TYPE: Partitioning scheme to use. Supported values: RANGE (use range partitioning), INTERVAL (use interval partitioning), LIST (use list partitioning), HASH (use hash partitioning), SERIES (use series partitioning).
PARTITION_KEYS: Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by PARTITION_DEFINITIONS.
PARTITION_DEFINITIONS: Comma-separated list of partition definitions, whose format depends on the choice of PARTITION_TYPE. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example formats.
IS_AUTOMATIC_PARTITION: If TRUE, a new partition will be created for values which don't fall into an existing partition. Currently only supported for list partitions. Supported values: TRUE, FALSE. The default value is FALSE.
TTL: Sets the TTL of the table specified in tableName.
CHUNK_SIZE: Indicates the number of records per chunk to be used for this table.
CHUNK_COLUMN_MAX_MEMORY: Indicates the target maximum data size for each column in a chunk to be used for this table.
CHUNK_MAX_MEMORY: Indicates the target maximum data size for all columns in a chunk to be used for this table.
IS_RESULT_TABLE: Indicates whether the table is a memory-only table. A result table cannot contain columns with text_search data-handling, and it will not be retained if the server is restarted. Supported values: TRUE, FALSE. The default value is FALSE.
STRATEGY_DEFINITION: The tier strategy for the table and its columns.
COMPRESSION_CODEC: The default compression codec for this table's columns.
LOAD_VECTORS_POLICY: Sets the startup data loading scheme for the table. Supported values: ALWAYS (load as much vector data as possible into memory before accepting requests), LAZY (load the necessary vector data at start, and load the remainder lazily), ON_DEMAND (load vector data as requests use it), SYSTEM (load vector data using the system-configured default). The default value is SYSTEM.
BUILD_PK_INDEX_POLICY: Sets the startup primary-key index generation scheme for the table. Supported values: ALWAYS (generate as much primary key index data as possible before accepting requests), LAZY (generate the necessary primary key index data at start, and load the remainder lazily), ON_DEMAND (generate primary key index data as requests use it), SYSTEM (generate primary key index data using the system-configured default). The default value is SYSTEM.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
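A sketch of creating a table from an inline list-of-columns type definition, modeled on the type-definition example above. The table name and URL are hypothetical; the client calls are commented out since they need a live cluster and the com.gpudb jar.

```java
import java.util.HashMap;
import java.util.Map;

public class CreateTableExample {
    // Inline type definition in the list-of-columns form shown above;
    // dept_id doubles as a primary key and the shard key.
    static String employeeType() {
        return "[ [\"id\", \"int8\", \"primary_key\"],"
             + " [\"dept_id\", \"int8\", \"primary_key\", \"shard_key\"],"
             + " [\"first_name\", \"char32\"],"
             + " [\"salary\", \"decimal\"] ]";
    }

    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        options.put("no_error_if_exists", "true");   // tolerate an identical existing table
        System.out.println(employeeType());
        // With a live cluster (hypothetical URL and table name):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createTable("hr.employees", employeeType(), options);
    }
}
```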
-
createTableExternal
public CreateTableExternalResponse createTableExternal(CreateTableExternalRequest request) throws GPUdbException
Creates a new external table, which is a local database object whose source data is located externally to the database. The source data can be located either in KiFS; on the cluster, accessible to the database; or remotely, accessible via a pre-defined external data source.
The external table can have its structure defined explicitly, via createTableOptions, which contains many of the options from createTable; or defined implicitly, inferred from the source data.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
createTableExternal
public CreateTableExternalResponse createTableExternal(String tableName, List<String> filepaths, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options) throws GPUdbException
Creates a new external table, which is a local database object whose source data is located externally to the database. The source data can be located either in KiFS; on the cluster, accessible to the database; or remotely, accessible via a pre-defined external data source. The external table can have its structure defined explicitly, via createTableOptions, which contains many of the options from createTable; or defined implicitly, inferred from the source data.
- Parameters:
tableName - Name of the table to be created, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. filepaths - A list of file paths from which data will be sourced. For paths in KiFS, use the URI prefix of kifs:// followed by the path to a file or directory. File matching by prefix is supported, e.g., kifs://dir/file would match dir/file_1 and dir/file_2. When prefix matching is used, the path must start with a full, valid KiFS directory name. If an external data source is specified in DATASOURCE_NAME, these file paths must resolve to accessible files at that data source location. Prefix matching is supported. If the data source is hdfs, prefixes must be aligned with directories, i.e., partial file names will not match. If no data source is specified, the files are assumed to be local to the database and must all be accessible to the gpudb user, residing on the path (or relative to the path) specified by the external files directory in the Kinetica configuration file. Wildcards (*) can be used to specify a group of files. Prefix matching is supported; the prefixes must be aligned with directories. If the first path ends in .tsv, the text delimiter will be defaulted to a tab character. If the first path ends in .psv, the text delimiter will be defaulted to a pipe character (|). modifyColumns - Not implemented yet. The default value is an empty Map. createTableOptions - Options from createTable, allowing the structure of the table to be defined independently of the data source. TYPE_ID: ID of a currently registered type. NO_ERROR_IF_EXISTS: If TRUE, prevents an error from occurring if the table already exists and is of the given type. If a table with the same name but a different type exists, it is still an error. Supported values: TRUE, FALSE. The default value is FALSE. IS_REPLICATED: Affects the distribution scheme for the table's data. If TRUE and the given table has no explicit shard key defined, the table will be replicated. If FALSE, the table will be sharded according to the shard key specified in the given TYPE_ID, or randomly sharded, if no shard key is specified. Note that a type containing a shard key cannot be used to create a replicated table. Supported values: TRUE, FALSE. The default value is FALSE. FOREIGN_KEYS: Semicolon-separated list of foreign keys, of the format '(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]'. FOREIGN_SHARD_KEY: Foreign shard key of the format 'source_column references shard_by_column from target_table(primary_key_column)'. PARTITION_TYPE: Partitioning scheme to use. Supported values: RANGE: Use range partitioning. INTERVAL: Use interval partitioning. LIST: Use list partitioning. HASH: Use hash partitioning. SERIES: Use series partitioning. PARTITION_KEYS: Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by PARTITION_DEFINITIONS. PARTITION_DEFINITIONS: Comma-separated list of partition definitions, whose format depends on the choice of PARTITION_TYPE. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example formats. IS_AUTOMATIC_PARTITION: If TRUE, a new partition will be created for values which don't fall into an existing partition. Currently, only supported for list partitions. Supported values: TRUE, FALSE. The default value is FALSE. TTL: Sets the TTL of the table specified in tableName. CHUNK_SIZE: Indicates the number of records per chunk to be used for this table. CHUNK_COLUMN_MAX_MEMORY: Indicates the target maximum data size for each column in a chunk to be used for this table. CHUNK_MAX_MEMORY: Indicates the target maximum data size for all columns in a chunk to be used for this table. IS_RESULT_TABLE: Indicates whether the table is a memory-only table. A result table cannot contain columns with text_search data-handling, and it will not be retained if the server is restarted. Supported values: TRUE, FALSE. The default value is FALSE. STRATEGY_DEFINITION: The tier strategy for the table and its columns. COMPRESSION_CODEC: The default compression codec for this table's columns.
The default value is an empty Map. options - Optional parameters. BAD_RECORD_TABLE_NAME: Name of a table to which records that were rejected are written. The bad-record table has the following columns: line_number (long), line_rejected (string), error_message (string). When ERROR_HANDLING is ABORT, the bad-record table is not populated. BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record table. The default value is '10000'. BAD_RECORD_TABLE_LIMIT_PER_INPUT: For subscriptions, a positive integer indicating the maximum number of records that can be written to the bad-record table per file/payload. The default value will be BAD_RECORD_TABLE_LIMIT, and the total size of the table per rank is limited to BAD_RECORD_TABLE_LIMIT. BATCH_SIZE: Number of records to insert per batch when inserting data. The default value is '50000'. COLUMN_FORMATS: For each target column specified, applies the column-property-bound format to the source data loaded into that column. Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, and datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., '{ "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }'. See DEFAULT_COLUMN_FORMATS for valid format syntax. COLUMNS_TO_LOAD: Specifies a comma-delimited list of columns from the source data to load. If more than one file is being loaded, this list applies to all files. Column numbers can be specified discretely or as a range. For example, a value of '5,7,1..3' will insert values from the fifth column in the source data into the first column in the target table, from the seventh column in the source data into the second column in the target table, and from the first through third columns in the source data into the third through fifth columns in the target table. If the source data contains a header, column names matching the file header names may be provided instead of column numbers. If the target table doesn't exist, the table will be created with the columns in this order. If the target table does exist with columns in a different order than the source data, this list can be used to match the order of the target table. For example, a value of 'C, B, A' will create a three-column table with column C, followed by column B, followed by column A; or will insert those fields in that order into a table created with columns in that order. If the target table exists, the column names must match the source data field names for a name-mapping to be successful. Mutually exclusive with COLUMNS_TO_SKIP. COLUMNS_TO_SKIP: Specifies a comma-delimited list of columns from the source data to skip. Mutually exclusive with COLUMNS_TO_LOAD. COMPRESSION_TYPE: Source data compression type. Supported values: NONE: No compression. AUTO: Auto detect compression type. GZIP: gzip file compression. BZIP2: bzip2 file compression. The default value is AUTO. DATASOURCE_NAME: Name of an existing external data source from which data file(s) specified in filepaths will be loaded. DEFAULT_COLUMN_FORMATS: Specifies the default format to be applied to source data loaded into columns with the corresponding column property. Currently supported column properties include date, time, and datetime. This default column-property-bound format can be overridden by specifying a column property and format for a given target column in COLUMN_FORMATS. For each specified annotation, the format will apply to all columns with that annotation unless a custom COLUMN_FORMATS for that annotation is specified. The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., '{ "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }'. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', and 'S', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds). Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation meet both the 'date' and 'time' control character requirements. For example, '{ "datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to interpret text as "05/04/2000 12:12:11". DATALAKE_CATALOG: Name of an existing datalake (iceberg) catalog used in loading files. DATALAKE_PATH: Path of datalake (iceberg) object. DATALAKE_SNAPSHOT: Snapshot ID of datalake (iceberg) object. ERROR_HANDLING: Specifies how errors should be handled upon insertion.
Supported values: PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped. IGNORE_BAD_RECORDS: Malformed records are skipped. ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode. The default value is ABORT. EXTERNAL_TABLE_TYPE: Specifies whether the external table holds a local copy of the external data. Supported values: MATERIALIZED: Loads a copy of the external data into the database, refreshed on demand. LOGICAL: External data will not be loaded into the database; the data will be retrieved from the source upon servicing each query against the external table. The default value is MATERIALIZED. FILE_TYPE: Specifies the type of the file(s) whose records will be inserted. Supported values: AVRO: Avro file format. DELIMITED_TEXT: Delimited text file format, e.g., CSV, TSV, PSV, etc. GDB: Esri/GDB file format. JSON: JSON file format. PARQUET: Apache Parquet file format. SHAPEFILE: ShapeFile file format. The default value is DELIMITED_TEXT. FLATTEN_COLUMNS: Specifies how to handle nested columns. Supported values: TRUE: Break up nested columns to multiple columns. FALSE: Treat nested columns as JSON columns instead of flattening. The default value is FALSE. GDAL_CONFIGURATION_OPTIONS: Comma-separated list of GDAL configuration options, for the specific requests: key=value. IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If FALSE, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by ERROR_HANDLING. If the specified table does not have a primary key or if upsert mode is in effect (UPDATE_ON_EXISTING_PK is TRUE), then this option has no effect. Supported values: TRUE: Ignore new records whose primary key values collide with those of existing records. FALSE: Treat as errors any new records whose primary key values collide with those of existing records. The default value is FALSE. INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values: FULL: Run a type inference on the source data (if needed) and ingest. DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING. TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response. The default value is FULL. JDBC_FETCH_SIZE: The JDBC fetch size, which determines how many rows to fetch per round trip. The default value is '50000'. KAFKA_CONSUMERS_PER_RANK: Number of Kafka consumer threads per rank (valid range 1-6). The default value is '1'. KAFKA_GROUP_ID: The group id to be used when consuming data from a Kafka topic (valid only for Kafka datasource subscriptions). KAFKA_OFFSET_RESET_POLICY: Policy to determine whether the Kafka data consumption starts either at earliest offset or latest offset. Supported values: EARLIEST, LATEST. The default value is EARLIEST. KAFKA_OPTIMISTIC_INGEST: Enable optimistic ingestion where Kafka topic offsets and table data are committed independently to achieve parallelism. Supported values: TRUE, FALSE. The default value is FALSE. KAFKA_SUBSCRIPTION_CANCEL_AFTER: Sets the Kafka subscription lifespan (in minutes). Expired subscriptions will be cancelled automatically. KAFKA_TYPE_INFERENCE_FETCH_TIMEOUT: Maximum time to collect Kafka messages before type inferencing on the set of them. LAYER: Geo files layer(s) name(s), comma-separated. LOADING_MODE: Scheme for distributing the extraction and loading of data from the source data file(s). This option applies only when loading files that are local to the database. Supported values: HEAD: The head node loads all data. All files must be available to the head node. DISTRIBUTED_SHARED: The head node coordinates loading data by worker processes across all nodes from shared files available to all workers. NOTE: Instead of existing on a shared source, the files can be duplicated on a source local to each host to improve performance, though the files must appear as the same data set from the perspective of all hosts performing the load. DISTRIBUTED_LOCAL: A single worker process on each node loads all files that are available to it. This option works best when each worker loads files from its own file system, to maximize performance.
In order to avoid data duplication, either each worker performing the load needs to have visibility to a set of files unique to it (no file is visible to more than one node) or the target table needs to have a primary key (which will allow the worker to automatically deduplicate data). NOTE: If the target table doesn't exist, the table structure will be determined by the head node. If the head node has no files local to it, it will be unable to determine the structure and the request will fail. If the head node is configured to have no worker processes, no data strictly accessible to the head node will be loaded.
The default value is HEAD. LOCAL_TIME_OFFSET: Apply an offset to Avro local timestamp columns. MAX_RECORDS_TO_LOAD: Limit the number of records to load in this request: if this number is larger than BATCH_SIZE, then the number of records loaded will be limited to the next whole number of BATCH_SIZE (per working thread). NUM_TASKS_PER_RANK: Number of tasks for reading file per rank. Default will be the system configuration parameter, external_file_reader_num_tasks. POLL_INTERVAL: If TRUE, the number of seconds between attempts to load external files into the table. If zero, polling will be continuous as long as data is found. If no data is found, the interval will steadily increase to a maximum of 60 seconds. The default value is '0'. PRIMARY_KEYS: Comma-separated list of column names to set as primary keys, when not specified in the type. REFRESH_METHOD: Method by which the table can be refreshed from its source data. Supported values: MANUAL: Refresh only occurs when manually requested by invoking the refresh action of alterTable on this table. ON_START: Refresh table on database startup and when manually requested by invoking the refresh action of alterTable on this table. The default value is MANUAL. SCHEMA_REGISTRY_CONNECTION_RETRIES: Confluent Schema Registry connection retries. SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema Registry connection timeout (in seconds). SCHEMA_REGISTRY_MAX_CONSECUTIVE_CONNECTION_FAILURES: Max records to skip due to Schema Registry connection failures, before failing. MAX_CONSECUTIVE_INVALID_SCHEMA_FAILURE: Max records to skip due to schema-related errors, before failing. SCHEMA_REGISTRY_SCHEMA_NAME: Name of the Avro schema in the schema registry to use when reading Avro records. SHARD_KEYS: Comma-separated list of column names to set as shard keys, when not specified in the type. SKIP_LINES: Skip a number of lines from the beginning of the file. START_OFFSETS: Starting offsets by partition to fetch from Kafka. A comma-separated list of partition:offset pairs. SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE. TABLE_INSERT_MODE: Insertion scheme to use when inserting records from multiple shapefiles. Supported values: SINGLE: Insert all records into a single table. TABLE_PER_FILE: Insert records from each file into a new table corresponding to that file. The default value is SINGLE. TEXT_COMMENT_STRING: Specifies the character string that should be interpreted as a comment line prefix in the source data. All lines in the data starting with the provided string are ignored. For DELIMITED_TEXT FILE_TYPE only. The default value is '#'. TEXT_DELIMITER: Specifies the character delimiting field values in the source data and field names in the header (if present). For DELIMITED_TEXT FILE_TYPE only. The default value is ','. TEXT_ESCAPE_CHARACTER: Specifies the character that is used to escape other characters in the source data. An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by an escape character will be interpreted as the ASCII bell, backspace, form feed, line feed, carriage return, horizontal tab, and vertical tab, respectively. For example, the escape character followed by an 'n' will be interpreted as a newline within a field value. The escape character can also be used to escape the quoting character, and will be treated as an escape character whether it is within a quoted field value or not. For DELIMITED_TEXT FILE_TYPE only. TEXT_HAS_HEADER: Indicates whether the source data contains a header row. For DELIMITED_TEXT FILE_TYPE only. Supported values: TRUE, FALSE. The default value is TRUE. TEXT_HEADER_PROPERTY_DELIMITER: Specifies the delimiter for column properties in the header row (if present). Cannot be set to the same value as TEXT_DELIMITER. For DELIMITED_TEXT FILE_TYPE only. The default value is '|'. TEXT_NULL_STRING: Specifies the character string that should be interpreted as a null value in the source data. For DELIMITED_TEXT FILE_TYPE only. The default value is '\N'. TEXT_QUOTE_CHARACTER: Specifies the character that should be interpreted as a field value quoting character in the source data. The character must appear at beginning and end of field value to take effect. Delimiters within quoted fields are treated as literals and not delimiters. Within a quoted field, two consecutive quote characters will be interpreted as a single literal quote character, effectively escaping it. To not have a quote character, specify an empty string. For DELIMITED_TEXT FILE_TYPE only. The default value is '"'. TEXT_SEARCH_COLUMNS: Add 'text_search' property to internally inferenced string columns. Comma-separated list of column names or '*' for all columns. To add the 'text_search' property only to string columns greater than or equal to a minimum size, also set TEXT_SEARCH_MIN_COLUMN_LENGTH. TEXT_SEARCH_MIN_COLUMN_LENGTH: Set the minimum column size for strings to apply the 'text_search' property to. Used only when TEXT_SEARCH_COLUMNS has a value. TRIM_SPACE: If set to TRUE, remove leading or trailing space from fields. Supported values: TRUE, FALSE. The default value is FALSE. TRUNCATE_STRINGS: If set to TRUE, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE. TRUNCATE_TABLE: If set to TRUE, truncates the table specified by tableName prior to loading the file(s). Supported values: TRUE, FALSE. The default value is FALSE. TYPE_INFERENCE_MAX_RECORDS_READ. TYPE_INFERENCE_MODE: Optimize type inferencing for either speed or accuracy. Supported values: ACCURACY: Scans data to get exactly-typed and sized columns for all data scanned. SPEED: Scans data and picks the widest possible column types so that 'all' values will fit with minimum data scanned. The default value is SPEED. REMOTE_QUERY: Remote SQL query from which data will be sourced. REMOTE_QUERY_FILTER_COLUMN: Name of column to be used for splitting REMOTE_QUERY into multiple sub-queries using the data distribution of the given column. REMOTE_QUERY_INCREASING_COLUMN: Column on subscribed remote query result that will increase for new records (e.g., TIMESTAMP). REMOTE_QUERY_PARTITION_COLUMN: Alias name for REMOTE_QUERY_FILTER_COLUMN. UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to TRUE, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be 'upserted'). If set to FALSE, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by IGNORE_EXISTING_PK and ERROR_HANDLING. If the specified table does not have a primary key, then this option has no effect. Supported values: TRUE: Upsert new records when primary keys match existing records. FALSE: Reject new records when primary keys match existing records. The default value is FALSE.
The default value is an empty Map.
- Returns: Response object containing the results of the operation.
- Throws: GPUdbException - if an error occurs during the operation.
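A minimal sketch of calling this overload for a delimited text file in KiFS. The lowercase option keys and values are assumed to be the wire forms of the constants listed above (FILE_TYPE, ERROR_HANDLING, etc.), and the table and file names are hypothetical; the call itself is commented out since it needs a live cluster.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ExternalTableExample {
    // Build the load options; key/value strings are assumed lowercase
    // forms of the documented constants.
    static Map<String, String> loadOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("file_type", "delimited_text");   // FILE_TYPE
        options.put("text_has_header", "true");       // TEXT_HAS_HEADER
        options.put("error_handling", "abort");       // ERROR_HANDLING
        options.put("refresh_method", "manual");      // REFRESH_METHOD
        return options;
    }

    public static void main(String[] args) {
        List<String> filepaths = new ArrayList<>();
        filepaths.add("kifs://data/orders.csv");      // KiFS URI prefix per the docs

        Map<String, String> options = loadOptions();
        // With a live cluster, the call would be:
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createTableExternal("ki_home.orders_ext", filepaths,
        //         new HashMap<>(),   // modifyColumns: not implemented yet
        //         new HashMap<>(),   // createTableOptions: infer structure from source
        //         options);
        System.out.println(filepaths.get(0));
    }
}
```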
-
createTableMonitor
public CreateTableMonitorResponse createTableMonitor(CreateTableMonitorRequest request) throws GPUdbException
Creates a monitor that watches for a single table modification event type (insert, update, or delete) on a particular table (identified by tableName) and forwards event notifications to subscribers via ZMQ. After this call completes, subscribe to the returned topicId on the ZMQ table monitor port (default 9002). Each time an operation of the given type on the table completes, a multipart message is published for that topic; the first part contains only the topic ID, and each subsequent part contains one binary-encoded Avro object that corresponds to the event and can be decoded using typeSchema. The monitor will continue to run (regardless of whether or not there are any subscribers) until deactivated with clearTableMonitor. For more information on table monitors, see Table Monitors.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns: Response object containing the results of the operation.
- Throws: GPUdbException - if an error occurs during the operation.
-
createTableMonitor
public CreateTableMonitorResponse createTableMonitor(String tableName, Map<String,String> options) throws GPUdbException
Creates a monitor that watches for a single table modification event type (insert, update, or delete) on a particular table (identified by tableName) and forwards event notifications to subscribers via ZMQ. After this call completes, subscribe to the returned topicId on the ZMQ table monitor port (default 9002). Each time an operation of the given type on the table completes, a multipart message is published for that topic; the first part contains only the topic ID, and each subsequent part contains one binary-encoded Avro object that corresponds to the event and can be decoded using typeSchema. The monitor will continue to run (regardless of whether or not there are any subscribers) until deactivated with clearTableMonitor. For more information on table monitors, see Table Monitors.
- Parameters:
tableName - Name of the table to monitor, in [schema_name.]table_name format, using standard name resolution rules. options - Optional parameters. EVENT: Type of modification event on the target table to be monitored by this table monitor. Supported values: INSERT: Get notifications of new record insertions. The new row images are forwarded to the subscribers. UPDATE: Get notifications of update operations. The modified row count information is forwarded to the subscribers. DELETE: Get notifications of delete operations. The deleted row count information is forwarded to the subscribers. The default value is INSERT. MONITOR_ID: ID to use for this monitor instead of a randomly generated one. DATASINK_NAME: Name of an existing data sink to send change data notifications to. DESTINATION: Destination for the output data in format 'destination_type://path[:port]'. Supported destination types are 'http', 'https' and 'kafka'. KAFKA_TOPIC_NAME: Name of the Kafka topic to publish to if DESTINATION in options is specified and is a Kafka broker. INCREASING_COLUMN: Column on subscribed table that will increase for new records (e.g., TIMESTAMP). EXPRESSION: Filter expression to limit records for notification. JOIN_TABLE_NAMES: A comma-separated list of tables (optionally with aliases) to include in the join. The monitored table tableName must be included, representing only the newly inserted rows (deltas) since the last notification. Other tables can be any existing tables or views. Aliases can be used with the 'table_name as alias' syntax. JOIN_COLUMN_NAMES: A comma-separated list of columns or expressions to include from the joined tables. Column references can use table names or aliases defined in 'join_table_names'. Each column can optionally be aliased using 'as'. The selected columns will also appear in the notification output. JOIN_EXPRESSIONS: Optional filter or join expressions to apply when combining the tables. Expressions are standard SQL-style conditions and can reference any table or alias listed in 'join_table_names'. This corresponds to the WHERE clause of the underlying join, and can include conditions to filter the delta rows. REFRESH_METHOD: Method controlling when the table monitor reports changes to the tableName. Supported values: ON_CHANGE: Report changes as they occur. PERIODIC: Report changes periodically at the rate specified by REFRESH_PERIOD. The default value is ON_CHANGE. REFRESH_PERIOD: When REFRESH_METHOD is PERIODIC, specifies the period in seconds at which changes are reported. REFRESH_START_TIME: When REFRESH_METHOD is PERIODIC, specifies the first time at which changes are reported. Value is a datetime string with format 'YYYY-MM-DD HH:MM:SS'.
The default value is an empty Map.
- Returns: Response object containing the results of the operation.
- Throws: GPUdbException - if an error occurs during the operation.
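A minimal sketch of the options map for an insert monitor, per the option list above. The lowercase key/value strings are assumed wire forms of the documented constants, and the table name and filter expression are hypothetical; the call and the ZMQ subscription step are shown as comments since they require a live cluster.

```java
import java.util.HashMap;
import java.util.Map;

public class TableMonitorExample {
    // Options for a monitor that reports new record insertions as they occur.
    static Map<String, String> monitorOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("event", "insert");              // EVENT: new-row-image notifications
        options.put("refresh_method", "on_change");  // REFRESH_METHOD: report as changes occur
        options.put("expression", "price > 100");    // EXPRESSION: hypothetical filter
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> options = monitorOptions();
        // With a live cluster:
        // GPUdb db = new GPUdb("http://localhost:9191");
        // CreateTableMonitorResponse resp = db.createTableMonitor("ki_home.orders", options);
        // Then subscribe to resp's topicId on tcp://<head-node>:9002 (default ZMQ
        // table monitor port) and decode message parts with the returned typeSchema.
        System.out.println(options.get("event"));
    }
}
```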
-
createTriggerByArea
public CreateTriggerByAreaResponse createTriggerByArea(CreateTriggerByAreaRequest request) throws GPUdbException
Sets up an area trigger mechanism for two column_names for one or more tables. (This function is essentially the two-dimensional version of createTriggerByRange.) Once the trigger has been activated, any record added to the listed table(s) via insertRecords with the chosen columns' values falling within the specified region will trip the trigger. All such records will be queued at the trigger port (by default '9001', but able to be retrieved via showSystemStatus) for any listening client to collect. Active triggers can be cancelled by using the clearTrigger endpoint or by clearing all relevant tables. The output returns the trigger handle as well as indicating success or failure of the trigger activation.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns: Response object containing the results of the operation.
- Throws: GPUdbException - if an error occurs during the operation.
-
createTriggerByArea
public CreateTriggerByAreaResponse createTriggerByArea(String requestId, List<String> tableNames, String xColumnName, List<Double> xVector, String yColumnName, List<Double> yVector, Map<String,String> options) throws GPUdbException
Sets up an area trigger mechanism for two column_names for one or more tables. (This function is essentially the two-dimensional version of createTriggerByRange.) Once the trigger has been activated, any record added to the listed table(s) via insertRecords with the chosen columns' values falling within the specified region will trip the trigger. All such records will be queued at the trigger port (by default '9001', but able to be retrieved via showSystemStatus) for any listening client to collect. Active triggers can be cancelled by using the clearTrigger endpoint or by clearing all relevant tables. The output returns the trigger handle as well as indicating success or failure of the trigger activation.
- Parameters:
requestId - User-created ID for the trigger. The ID can be alphanumeric, contain symbols, and must contain at least one character. tableNames - Names of the tables on which the trigger will be activated and maintained, each in [schema_name.]table_name format, using standard name resolution rules. xColumnName - Name of a numeric column on which the trigger is activated. Usually 'x' for geospatial data points. xVector - The respective coordinate values for the region on which the trigger is activated. This usually translates to the x-coordinates of a geospatial region. yColumnName - Name of a second numeric column on which the trigger is activated. Usually 'y' for geospatial data points. yVector - The respective coordinate values for the region on which the trigger is activated. This usually translates to the y-coordinates of a geospatial region. Must be the same length as xVector. options - Optional parameters. The default value is an empty Map.
- Returns: Response object containing the results of the operation.
- Throws: GPUdbException - if an error occurs during the operation.
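The x/y vectors above describe the vertices of the trigger region; as a sketch, the following builds a rectangular region (coordinates, table, and column names are hypothetical), with the actual call commented out since it needs a live cluster.

```java
import java.util.Arrays;
import java.util.List;

public class AreaTriggerExample {
    // Vertices of a rectangular region. xVector and yVector hold the
    // respective coordinates and must be the same length, per the docs.
    static final List<Double> xVector = Arrays.asList(-75.0, -75.0, -73.0, -73.0);
    static final List<Double> yVector = Arrays.asList(39.0, 41.0, 41.0, 39.0);

    public static void main(String[] args) {
        // With a live cluster (names hypothetical):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createTriggerByArea("geo_trigger_1",
        //         Arrays.asList("ki_home.vehicle_positions"),
        //         "x", xVector, "y", yVector,
        //         new java.util.HashMap<String, String>());
        // Tripped records are then queued on the trigger port (default 9001).
        System.out.println(xVector.size() == yVector.size());
    }
}
```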
-
createTriggerByRange
public CreateTriggerByRangeResponse createTriggerByRange(CreateTriggerByRangeRequest request) throws GPUdbException
Sets up a simple range trigger for a column_name for one or more tables. Once the trigger has been activated, any record added to the listed table(s) via insertRecords with the chosen column_name's value falling within the specified range will trip the trigger. All such records will be queued at the trigger port (by default '9001', but able to be retrieved via showSystemStatus) for any listening client to collect. Active triggers can be cancelled by using the clearTrigger endpoint or by clearing all relevant tables. The output returns the trigger handle as well as indicating success or failure of the trigger activation.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns: Response object containing the results of the operation.
- Throws: GPUdbException - if an error occurs during the operation.
-
createTriggerByRange
public CreateTriggerByRangeResponse createTriggerByRange(String requestId, List<String> tableNames, String columnName, double min, double max, Map<String,String> options) throws GPUdbException
Sets up a simple range trigger for a column_name for one or more tables. Once the trigger has been activated, any record added to the listed table(s) via insertRecords with the chosen column_name's value falling within the specified range will trip the trigger. All such records will be queued at the trigger port (by default '9001', but able to be retrieved via showSystemStatus) for any listening client to collect. Active triggers can be cancelled by using the clearTrigger endpoint or by clearing all relevant tables. The output returns the trigger handle as well as indicating success or failure of the trigger activation.
- Parameters:
requestId- User-created ID for the trigger. The ID can be alphanumeric, contain symbols, and must contain at least one character.tableNames- Tables on which the trigger will be active, each in [schema_name.]table_name format, using standard name resolution rules.columnName- Name of a numeric column_name on which the trigger is activated.min- The lower bound (inclusive) for the trigger range.max- The upper bound (inclusive) for the trigger range.options- Optional parameters. The default value is an emptyMap.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
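As a minimal sketch of the convenience overload above (the table name, column name, and cluster URL are hypothetical; the commented-out calls require the GPUdb Java API on the classpath and a running cluster):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class TriggerRangeExample {
    public static void main(String[] args) {
        // Hypothetical identifiers for illustration only.
        String requestId = "temp_range_trigger";           // user-created trigger ID
        List<String> tableNames =
            Arrays.asList("ki_home.sensor_readings");      // [schema_name.]table_name format
        String columnName = "temperature";                 // must be a numeric column
        Map<String, String> options = Collections.emptyMap();

        // Against a live cluster:
        // GPUdb db = new GPUdb("http://localhost:9191");
        // CreateTriggerByRangeResponse resp =
        //     db.createTriggerByRange(requestId, tableNames, columnName, 0.0, 100.0, options);
        // The response carries the trigger handle; cancel it later via clearTrigger.
        System.out.println(requestId + " on " + tableNames + "." + columnName);
    }
}
```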
-
createType
public CreateTypeResponse createType(CreateTypeRequest request) throws GPUdbException
Creates a new type describing the columns of a table. The type definition is specified as a list of columns, each specified as a list of the column name, data type, and any column attributes. Example of a type definition with some parameters:
[ ["id", "int8", "primary_key"], ["dept_id", "int8", "primary_key", "shard_key"], ["manager_id", "int8", "nullable"], ["first_name", "char32"], ["last_name", "char64"], ["salary", "decimal"], ["hire_date", "date"] ] Each column definition consists of the column name (which should meet the standard column naming criteria), the column's specific type (int, long, float, double, string, bytes, or any of the possible values for properties), and any data handling, data key, or data replacement properties. Note that some properties are mutually exclusive--i.e., they cannot be specified for any given column simultaneously. One example is the pair
PRIMARY_KEY and NULLABLE. A single primary key and/or single shard key can be set across one or more columns. If a primary key is specified, then a uniqueness constraint is enforced, in that only a single object can exist with a given primary key column value (or set of values for the key columns, if using a composite primary key). When
inserting data into a table with a primary key, depending on the parameters in the request, incoming objects with primary key values that match existing objects will either overwrite (i.e. update) the existing object or will be skipped and not added into the set.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
createType
public CreateTypeResponse createType(String typeDefinition, String label, Map<String,List<String>> properties, Map<String,String> options) throws GPUdbException
Creates a new type describing the columns of a table. The type definition is specified as a list of columns, each specified as a list of the column name, data type, and any column attributes. Example of a type definition with some parameters:
[ ["id", "int8", "primary_key"], ["dept_id", "int8", "primary_key", "shard_key"], ["manager_id", "int8", "nullable"], ["first_name", "char32"], ["last_name", "char64"], ["salary", "decimal"], ["hire_date", "date"] ] Each column definition consists of the column name (which should meet the standard column naming criteria), the column's specific type (int, long, float, double, string, bytes, or any of the possible values for properties), and any data handling, data key, or data replacement properties. Note that some properties are mutually exclusive--i.e., they cannot be specified for any given column simultaneously. One example is the pair
PRIMARY_KEY and NULLABLE. A single primary key and/or single shard key can be set across one or more columns. If a primary key is specified, then a uniqueness constraint is enforced, in that only a single object can exist with a given primary key column value (or set of values for the key columns, if using a composite primary key). When
inserting data into a table with a primary key, depending on the parameters in the request, incoming objects with primary key values that match existing objects will either overwrite (i.e. update) the existing object or will be skipped and not added into the set.- Parameters:
typeDefinition- A JSON string describing the columns of the type to be registered, as described above.label- A user-defined description string which can be used to differentiate between tables and types with otherwise identical schemas.properties- [DEPRECATED--please use these property values in thetypeDefinitiondirectly, as described at the top, instead] Each key-value pair specifies the properties to use for a given column where the key is the column name. All keys used must be relevant column names for the given table. Specifying any property overrides the default properties for that column (which is based on the column's data type). Valid values are:DATA: Default property for all numeric and string type columns; makes the column available for GPU queries.TEXT_SEARCH: Valid only for select 'string' columns. Enables full text search--see Full Text Search for details and applicable string column types.TIMESTAMP: Valid only for 'long' columns. Indicates that this field represents a timestamp and will be provided in milliseconds since the Unix epoch: 00:00:00 Jan 1 1970. Dates represented by a timestamp must fall between the year 1000 and the year 2900.ULONG: Valid only for 'string' columns. It represents an unsigned long integer data type. The string can only be interpreted as an unsigned long data type with minimum value of zero, and maximum value of 18446744073709551615.UUID: Valid only for 'string' columns. It represents a UUID data type. Internally, it is stored as a 128-bit integer.DECIMAL: Valid only for 'string' columns. It represents a SQL type NUMERIC(19, 4) data type. There can be up to 15 digits before the decimal point and up to four digits in the fractional part. The value can be positive or negative (indicated by a minus sign at the beginning). This property is mutually exclusive with theTEXT_SEARCHproperty.DATE: Valid only for 'string' columns. Indicates that this field represents a date and will be provided in the format 'YYYY-MM-DD'. 
The allowable range is 1000-01-01 through 2900-01-01. This property is mutually exclusive with theTEXT_SEARCHproperty.TIME: Valid only for 'string' columns. Indicates that this field represents a time-of-day and will be provided in the format 'HH:MM:SS.mmm'. The allowable range is 00:00:00.000 through 23:59:59.999. This property is mutually exclusive with theTEXT_SEARCHproperty.DATETIME: Valid only for 'string' columns. Indicates that this field represents a datetime and will be provided in the format 'YYYY-MM-DD HH:MM:SS.mmm'. The allowable range is 1000-01-01 00:00:00.000 through 2900-01-01 23:59:59.999. This property is mutually exclusive with theTEXT_SEARCHproperty.CHAR1: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 1 character.CHAR2: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 2 characters.CHAR4: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 4 characters.CHAR8: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 8 characters.CHAR16: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 16 characters.CHAR32: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 32 characters.CHAR64: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 64 characters.CHAR128: This property provides optimized memory, disk and query performance for string columns. 
Strings with this property must be no longer than 128 characters.CHAR256: This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 256 characters.BOOLEAN: This property provides optimized memory and query performance for int columns. Ints with this property must be between 0 and 1(inclusive)INT8: This property provides optimized memory and query performance for int columns. Ints with this property must be between -128 and +127 (inclusive)INT16: This property provides optimized memory and query performance for int columns. Ints with this property must be between -32768 and +32767 (inclusive)IPV4: This property provides optimized memory, disk and query performance for string columns representing IPv4 addresses (i.e. 192.168.1.1). Strings with this property must be of the form: A.B.C.D where A, B, C and D are in the range of 0-255.ARRAY: Valid only for 'string' columns. Indicates that this field contains an array. The value type and (optionally) the item count should be specified in parenthesis; e.g., 'array(int, 10)' for a 10-integer array. Both 'array(int)' and 'array(int, -1)' will designate an unlimited-length integer array, though no bounds checking is performed on arrays of any length.JSON: Valid only for 'string' columns. Indicates that this field contains values in JSON format.VECTOR: Valid only for 'bytes' columns. Indicates that this field contains a vector of floats. The length should be specified in parenthesis, e.g., 'vector(1000)'.WKT: Valid only for 'string' and 'bytes' columns. 
Indicates that this field contains geospatial geometry objects in Well-Known Text (WKT) or Well-Known Binary (WKB) format.PRIMARY_KEY: This property indicates that this column will be part of (or the entire) primary key.SOFT_PRIMARY_KEY: This property indicates that this column will be part of (or the entire) soft primary key.SHARD_KEY: This property indicates that this column will be part of (or the entire) shard key.NULLABLE: This property indicates that this column is nullable. However, setting this property is insufficient for making the column nullable. The user must declare the type of the column as a union between its regular type and 'null' in the Avro schema for the record type intypeDefinition. For example, if a column is of type integer and is nullable, then the entry for the column in the Avro schema must be: ['int', 'null']. The C++, C#, Java, and Python APIs have built-in convenience for bypassing setting the Avro schema by hand. For those languages, one can use this property as usual and not have to worry about the Avro schema for the record.COMPRESS: This property indicates that this column should be compressed with the given codec and optional level; e.g., 'compress(snappy)' for Snappy compression and 'compress(zstd(7))' for zstd level 7 compression. This property is primarily used in order to save disk space.DICT: This property indicates that this column should be dictionary encoded. It can only be used in conjunction with restricted string (charN), int, long or date columns. Dictionary encoding is best for columns where the cardinality (the number of unique values) is expected to be low. 
This property can save a large amount of memory.INIT_WITH_NOW: For 'date', 'time', 'datetime', or 'timestamp' column types, replace empty strings and invalid timestamps with 'NOW()' upon insert.INIT_WITH_UUID: For 'uuid' type, replace empty strings and invalid UUID values with randomly-generated UUIDs upon insert.UPDATE_WITH_NOW: For 'date', 'time', 'datetime', or 'timestamp' column types, always update the field with 'NOW()' upon any update.
Map.options- Optional parameters.COMPRESSION_CODEC: The default compression codec for this type's columns.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
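A sketch of building the typeDefinition JSON string shown in the example above for the string-based overload (the label and cluster URL are hypothetical; the commented-out call requires the GPUdb Java API and a running cluster):

```java
public class CreateTypeExample {
    public static void main(String[] args) {
        // The example type definition above, in the JSON string form expected by
        // createType(typeDefinition, label, properties, options).
        String typeDefinition = "["
            + "[\"id\", \"int8\", \"primary_key\"],"
            + "[\"dept_id\", \"int8\", \"primary_key\", \"shard_key\"],"
            + "[\"manager_id\", \"int8\", \"nullable\"],"
            + "[\"first_name\", \"char32\"],"
            + "[\"last_name\", \"char64\"],"
            + "[\"salary\", \"decimal\"],"
            + "[\"hire_date\", \"date\"]"
            + "]";

        // Against a live cluster (hypothetical URL and label):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // CreateTypeResponse resp = db.createType(typeDefinition, "employee_type",
        //     java.util.Collections.emptyMap(), java.util.Collections.emptyMap());
        System.out.println(typeDefinition);
    }
}
```

Note that column properties such as primary_key and nullable are embedded directly in each column definition, which is the non-deprecated path; the separate properties map is only needed for legacy code.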
-
createUnion
public CreateUnionResponse createUnion(CreateUnionRequest request) throws GPUdbException
Merges data from one or more tables with comparable data types into a new table.The following merges are supported:
UNION (DISTINCT/ALL) - For data set union details and examples, see Union. For limitations, see Union Limitations and Cautions.
INTERSECT (DISTINCT/ALL) - For data set intersection details and examples, see Intersect. For limitations, see Intersect Limitations.
EXCEPT (DISTINCT/ALL) - For data set subtraction details and examples, see Except. For limitations, see Except Limitations.
MERGE VIEWS - For a given set of filtered views on a single table, creates a single filtered view containing all of the unique records across all of the given filtered data sets.
Non-charN 'string' and 'bytes' column types cannot be merged, nor can columns marked as store-only.
- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
createUnion
public CreateUnionResponse createUnion(String tableName, List<String> tableNames, List<List<String>> inputColumnNames, List<String> outputColumnNames, Map<String,String> options) throws GPUdbException
Merges data from one or more tables with comparable data types into a new table.The following merges are supported:
UNION (DISTINCT/ALL) - For data set union details and examples, see Union. For limitations, see Union Limitations and Cautions.
INTERSECT (DISTINCT/ALL) - For data set intersection details and examples, see Intersect. For limitations, see Intersect Limitations.
EXCEPT (DISTINCT/ALL) - For data set subtraction details and examples, see Except. For limitations, see Except Limitations.
MERGE VIEWS - For a given set of filtered views on a single table, creates a single filtered view containing all of the unique records across all of the given filtered data sets.
Non-charN 'string' and 'bytes' column types cannot be merged, nor can columns marked as store-only.
- Parameters:
tableName- Name of the table to be created, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.tableNames- The list of table names to merge, in [schema_name.]table_name format, using standard name resolution rules. Must contain the names of one or more existing tables.inputColumnNames- The list of columns from each of the corresponding input tables.outputColumnNames- The list of names of the columns to be stored in the output table.options- Optional parameters.CREATE_TEMP_TABLE: IfTRUE, a unique temporary table name will be generated in the sys_temp schema and used in place oftableName. IfPERSISTisFALSE(or unspecified), then this is always allowed even if the caller does not have permission to create tables. The generated name is returned inQUALIFIED_TABLE_NAME. Supported values: The default value isFALSE.COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the projection as part oftableNameand usecreateSchemato create the schema if non-existent] Name of the schema for the output table. If the schema provided is non-existent, it will be automatically created. The default value is ''.MODE: The mode describes what rows of the tables being unioned will be retained. Supported values:UNION_ALL: Retains all rows from the specified tables.UNION: Retains all unique rows from the specified tables (synonym forUNION_DISTINCT).UNION_DISTINCT: Retains all unique rows from the specified tables.EXCEPT: Retains all unique rows from the first table that do not appear in the second table (only works on 2 tables).EXCEPT_ALL: Retains all rows(including duplicates) from the first table that do not appear in the second table (only works on 2 tables).INTERSECT: Retains all unique rows that appear in both of the specified tables (only works on 2 tables).INTERSECT_ALL: Retains all rows(including duplicates) that appear in both of the specified tables (only works on 2 tables).
The default value isUNION_ALL.LONG_HASH: WhenTRUE, a 128-bit hash is used for the union-distinct, except, except_all, intersect, and intersect_all modes; otherwise a 64-bit hash is used.CHUNK_SIZE: Indicates the number of records per chunk to be used for this output table.CHUNK_COLUMN_MAX_MEMORY: Indicates the target maximum data size for each column in a chunk to be used for this output table.CHUNK_MAX_MEMORY: Indicates the target maximum data size for all columns in a chunk to be used for this output table.CREATE_INDEXES: Comma-separated list of columns on which to create indexes on the output table. The columns specified must be present inoutputColumnNames.TTL: Sets the TTL of the output table specified intableName.PERSIST: IfTRUE, then the output table specified intableNamewill be persisted and will not expire unless aTTLis specified. IfFALSE, then the output table will be an in-memory table and will expire unless aTTLis specified otherwise. Supported values: The default value isFALSE.VIEW_ID: ID of the view of which this output table is a member. The default value is ''.FORCE_REPLICATED: IfTRUE, then the output table specified intableNamewill be replicated even if the source tables are not. Supported values: The default value isFALSE.STRATEGY_DEFINITION: The tier strategy for the table and its columns.COMPRESSION_CODEC: The default compression codec for this table's columns.NO_COUNT: Return a count of 0 for the union table response to avoid the cost of counting; an optimization needed for many chunk virtual_union's. The default value is 'false'.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
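A sketch of the argument structure for the convenience overload above (the table and column names are hypothetical; the commented-out call requires the GPUdb Java API and a running cluster):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CreateUnionExample {
    public static void main(String[] args) {
        // Hypothetical source tables with comparable column types.
        List<String> tableNames = Arrays.asList("ki_home.sales_2023", "ki_home.sales_2024");
        // One inner list of input columns per source table, in the same order.
        List<List<String>> inputColumnNames = Arrays.asList(
            Arrays.asList("region", "amount"),   // from sales_2023
            Arrays.asList("region", "amount"));  // from sales_2024
        List<String> outputColumnNames = Arrays.asList("region", "amount");

        Map<String, String> options = new HashMap<>();
        options.put("mode", "union_distinct");   // retain only unique rows
        options.put("persist", "true");          // keep the result past any TTL

        // Against a live cluster:
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createUnion("ki_home.sales_all", tableNames, inputColumnNames,
        //                outputColumnNames, options);
        System.out.println(tableNames + " -> " + outputColumnNames + " " + options);
    }
}
```

The except and intersect modes follow the same shape but accept exactly two source tables.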
-
createUserExternal
public CreateUserExternalResponse createUserExternal(CreateUserExternalRequest request) throws GPUdbException
Creates a new external user (a user whose credentials are managed by an external LDAP).Note: This method should be used for on-premise deployments only.
- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
createUserExternal
public CreateUserExternalResponse createUserExternal(String name, Map<String,String> options) throws GPUdbException
Creates a new external user (a user whose credentials are managed by an external LDAP).Note: This method should be used for on-premise deployments only.
- Parameters:
name- Name of the user to be created. Must exactly match the user's name in the external LDAP, prefixed with a @. Must not be the same name as an existing user.options- Optional parameters.ACTIVATED: Is the user allowed to login. Supported values: The default value isTRUE.CREATE_HOME_DIRECTORY: WhenTRUE, a home directory in KiFS is created for this user. Supported values: The default value isTRUE.DEFAULT_SCHEMA: Default schema to associate with this userDIRECTORY_DATA_LIMIT: The maximum capacity to apply to the created directory ifCREATE_HOME_DIRECTORYisTRUE. Set to -1 to indicate no upper limit. If empty, the system default limit is applied.RESOURCE_GROUP: Name of an existing resource group to associate with this user
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
createUserInternal
public CreateUserInternalResponse createUserInternal(CreateUserInternalRequest request) throws GPUdbException
Creates a new internal user (a user whose credentials are managed by the database system).- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
createUserInternal
public CreateUserInternalResponse createUserInternal(String name, String password, Map<String,String> options) throws GPUdbException
Creates a new internal user (a user whose credentials are managed by the database system).- Parameters:
name- Name of the user to be created. Must contain only lowercase letters, digits, and underscores, and cannot begin with a digit. Must not be the same name as an existing user or role.password- Initial password of the user to be created. May be an empty string for no password.options- Optional parameters.ACTIVATED: Is the user allowed to login. Supported values: The default value isTRUE.CREATE_HOME_DIRECTORY: WhenTRUE, a home directory in KiFS is created for this user. Supported values: The default value isTRUE.DEFAULT_SCHEMA: Default schema to associate with this userDIRECTORY_DATA_LIMIT: The maximum capacity to apply to the created directory ifCREATE_HOME_DIRECTORYisTRUE. Set to -1 to indicate no upper limit. If empty, the system default limit is applied.RESOURCE_GROUP: Name of an existing resource group to associate with this user
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
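A sketch of the options map for the convenience overload above (the user name, schema, and resource group are hypothetical; the commented-out call requires the GPUdb Java API and a running cluster):

```java
import java.util.HashMap;
import java.util.Map;

public class CreateUserExample {
    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        options.put("activated", "true");             // allow the user to log in
        options.put("create_home_directory", "true"); // create a KiFS home directory
        options.put("default_schema", "analytics");   // hypothetical schema name
        options.put("resource_group", "analysts");    // hypothetical existing resource group

        // Against a live cluster:
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createUserInternal("jane_doe", "initial_password", options);
        System.out.println(options.keySet());
    }
}
```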
-
createVideo
public CreateVideoResponse createVideo(CreateVideoRequest request) throws GPUdbException
Creates a job to generate a sequence of raster images that visualize data over a specified time.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
createVideo
public CreateVideoResponse createVideo(String attribute, String begin, double durationSeconds, String end, double framesPerSecond, String style, String path, String styleParameters, Map<String,String> options) throws GPUdbException
Creates a job to generate a sequence of raster images that visualize data over a specified time.- Parameters:
attribute- The animated attribute to map to the video's frames. Must be present in the LAYERS specified for the visualization. This is often a time-related field but may be any numeric type.begin- The start point for the video. Accepts an expression evaluable over theattribute.durationSeconds- Seconds of video to produceend- The end point for the video. Accepts an expression evaluable over theattribute.framesPerSecond- The presentation frame rate of the encoded video in frames per second.style- The name of the visualize mode; should correspond to the schema used for thestyleParametersfield. Supported values:path- Fully-qualified KiFS path. Write access is required. A file must not exist at that path, unlessREPLACE_IF_EXISTSisTRUE.styleParameters- A string containing the JSON-encoded visualize request. Must correspond to the visualize mode specified in thestylefield.options- Optional parameters.TTL: Sets the TTL of the video.WINDOW: Specified using the data-type corresponding to theattribute. For a window of size W, a video frame rendered for time t will visualize data in the interval [t-W,t]. The minimum window size is the interval between successive frames. The minimum value is the default. If a value less than the minimum value is specified, it is replaced with the minimum window size. Larger values will make changes throughout the video appear more smooth while smaller values will capture fast variations in the data.NO_ERROR_IF_EXISTS: IfTRUE, does not return an error if the video already exists. Ignored ifREPLACE_IF_EXISTSisTRUE. Supported values: The default value isFALSE.REPLACE_IF_EXISTS: IfTRUE, deletes any existing video with the same path before creating a new video. Supported values: The default value isFALSE.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteDirectory
public DeleteDirectoryResponse deleteDirectory(DeleteDirectoryRequest request) throws GPUdbException
Deletes a directory from KiFS.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteDirectory
public DeleteDirectoryResponse deleteDirectory(String directoryName, Map<String,String> options) throws GPUdbException
Deletes a directory from KiFS.- Parameters:
directoryName- Name of the directory in KiFS to be deleted. The directory must contain no files, unlessRECURSIVEisTRUE.options- Optional parameters.RECURSIVE: IfTRUE, will delete the directory and all files residing in it. IfFALSE, the directory must be empty for deletion. Supported values: The default value isFALSE.NO_ERROR_IF_NOT_EXISTS: IfTRUE, no error is returned if the specified directory does not exist. Supported values: The default value isFALSE.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
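A sketch of the options for the convenience overload above (the directory name is hypothetical; the commented-out call requires the GPUdb Java API and a running cluster):

```java
import java.util.HashMap;
import java.util.Map;

public class DeleteDirectoryExample {
    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        options.put("recursive", "true");              // delete the directory and its files
        options.put("no_error_if_not_exists", "true"); // make the call idempotent

        // Against a live cluster:
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.deleteDirectory("scratch", options);  // "scratch" is a hypothetical KiFS directory
        System.out.println(options);
    }
}
```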
-
deleteFiles
public DeleteFilesResponse deleteFiles(DeleteFilesRequest request) throws GPUdbException
Deletes one or more files from KiFS.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteFiles
public DeleteFilesResponse deleteFiles(List<String> fileNames, Map<String,String> options) throws GPUdbException
Deletes one or more files from KiFS.- Parameters:
fileNames- An array of names of files to be deleted. File paths may contain wildcard characters after the KiFS directory delimiter. Accepted wildcard characters are asterisk (*) to represent any string of zero or more characters, and question mark (?) to indicate a single character.options- Optional parameters.NO_ERROR_IF_NOT_EXISTS: IfTRUE, no error is returned if a specified file does not exist. Supported values: The default value isFALSE.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteGraph
public DeleteGraphResponse deleteGraph(DeleteGraphRequest request) throws GPUdbException
Deletes an existing graph from the graph server and/or persist.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteGraph
public DeleteGraphResponse deleteGraph(String graphName, Map<String,String> options) throws GPUdbException
Deletes an existing graph from the graph server and/or persist.- Parameters:
graphName- Name of the graph to be deleted.options- Optional parameters.DELETE_PERSIST: If set toTRUE, the graph is removed from the server and persist. If set toFALSE, the graph is removed from the server but is left in persist. The graph can be reloaded from persist if it is recreated with the same 'graph_name'. Supported values: The default value isTRUE.SERVER_ID: Indicates which graph server(s) to send the request to. Default is to send to get information about all the servers.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteProc
public DeleteProcResponse deleteProc(DeleteProcRequest request) throws GPUdbException
Deletes a proc. Any currently running instances of the proc will be killed.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteProc
public DeleteProcResponse deleteProc(String procName, Map<String,String> options) throws GPUdbException
Deletes a proc. Any currently running instances of the proc will be killed.- Parameters:
procName- Name of the proc to be deleted. Must be the name of a currently existing proc.options- Optional parameters. The default value is an emptyMap.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteRecords
public DeleteRecordsResponse deleteRecords(DeleteRecordsRequest request) throws GPUdbException
Deletes record(s) matching the provided criteria from the given table. The record selection criteria can either be one or more expressions (matching multiple records), a single record identified by RECORD_ID options, or all records when using DELETE_ALL_RECORDS. Note that the three selection criteria are mutually exclusive. This operation cannot be run on a view. The operation is synchronous, meaning that a response will not be available until the request is completely processed and all the matching records are deleted.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteRecords
public DeleteRecordsResponse deleteRecords(String tableName, List<String> expressions, Map<String,String> options) throws GPUdbException
Deletes record(s) matching the provided criteria from the given table. The record selection criteria can either be one or more expressions (matching multiple records), a single record identified by RECORD_ID options, or all records when using DELETE_ALL_RECORDS. Note that the three selection criteria are mutually exclusive. This operation cannot be run on a view. The operation is synchronous, meaning that a response will not be available until the request is completely processed and all the matching records are deleted.- Parameters:
tableName- Name of the table from which to delete records, in [schema_name.]table_name format, using standard name resolution rules. Must contain the name of an existing table; not applicable to views.expressions- A list of the actual predicates, one for each select; format should follow the guidelines provided here. Specifying one or moreexpressionsis mutually exclusive to specifyingRECORD_IDin theoptions.options- Optional parameters.GLOBAL_EXPRESSION: An optional global expression to reduce the search space of theexpressions. The default value is ''.RECORD_ID: A record ID identifying a single record, obtained at the time of insertion of the record or by callinggetRecordsFromCollectionwith the *return_record_ids* option. This option cannot be used to delete records from replicated tables.DELETE_ALL_RECORDS: If set toTRUE, all records in the table will be deleted. If set toFALSE, then the option is effectively ignored. Supported values: The default value isFALSE.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
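A sketch of the expression-based and delete-all selection modes described above (the table, columns, and predicates are hypothetical; the commented-out calls require the GPUdb Java API and a running cluster):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DeleteRecordsExample {
    public static void main(String[] args) {
        // Expression-based deletion: each entry is one predicate.
        List<String> expressions = Arrays.asList("status = 'expired'");

        // A global expression narrows the search space before the
        // per-select expressions are applied.
        Map<String, String> options = new HashMap<>();
        options.put("global_expression", "year(ts) < 2020");

        // The selection modes are mutually exclusive; deleting everything
        // instead would use an empty expression list with this option:
        Map<String, String> deleteAll = new HashMap<>();
        deleteAll.put("delete_all_records", "true");

        // Against a live cluster:
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.deleteRecords("ki_home.events", expressions, options);
        // db.deleteRecords("ki_home.events", new java.util.ArrayList<String>(), deleteAll);
        System.out.println(expressions + " / " + options + " / " + deleteAll);
    }
}
```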
-
deleteResourceGroup
public DeleteResourceGroupResponse deleteResourceGroup(DeleteResourceGroupRequest request) throws GPUdbException
Deletes a resource group.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteResourceGroup
public DeleteResourceGroupResponse deleteResourceGroup(String name, Map<String,String> options) throws GPUdbException
Deletes a resource group.- Parameters:
name- Name of the resource group to be deleted.options- Optional parameters.CASCADE_DELETE: IfTRUE, delete any existing entities owned by this group. Otherwise this request will return an error if any such entities exist. Supported values: The default value isFALSE.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteRole
public DeleteRoleResponse deleteRole(DeleteRoleRequest request) throws GPUdbException
Deletes an existing role.Note: This method should be used for on-premise deployments only.
- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteRole
public DeleteRoleResponse deleteRole(String name, Map<String,String> options) throws GPUdbException
Deletes an existing role.Note: This method should be used for on-premise deployments only.
- Parameters:
name- Name of the role to be deleted. Must be an existing role.options- Optional parameters. The default value is an emptyMap.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteUser
public DeleteUserResponse deleteUser(DeleteUserRequest request) throws GPUdbException
Deletes an existing user. Note: This method should be used for on-premise deployments only.
- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
deleteUser
public DeleteUserResponse deleteUser(String name, Map<String,String> options) throws GPUdbException
Deletes an existing user. Note: This method should be used for on-premise deployments only.
- Parameters:
name- Name of the user to be deleted. Must be an existing user.options- Optional parameters. The default value is an emptyMap.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
downloadFiles
public DownloadFilesResponse downloadFiles(DownloadFilesRequest request) throws GPUdbException
Downloads one or more files from KiFS.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
downloadFiles
public DownloadFilesResponse downloadFiles(List<String> fileNames, List<Long> readOffsets, List<Long> readLengths, Map<String,String> options) throws GPUdbException
Downloads one or more files from KiFS.- Parameters:
fileNames - An array of the file names to download from KiFS. File paths may contain wildcard characters after the KiFS directory delimiter. Accepted wildcard characters are asterisk (*) to represent any string of zero or more characters, and question mark (?) to indicate a single character. readOffsets - An array of starting byte offsets from which to read each respective file in fileNames. Must either be empty or the same length as fileNames. If empty, files are downloaded in their entirety. If not empty, readLengths must also not be empty. readLengths - Array of the number of bytes to read from each respective file in fileNames. Must either be empty or the same length as fileNames. If empty, files are downloaded in their entirety. If not empty, readOffsets must also not be empty. options - Optional parameters. FILE_ENCODING: Encoding to be applied to the output file data. When using JSON serialization it is recommended to specify this as BASE64. Supported values: BASE64: Apply base64 encoding to the output file data. NONE: Do not apply any encoding to the output file data. The default value is NONE.
The default value is an empty Map. - Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
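The offsets/lengths pairing rule above can be checked locally before calling the endpoint. A minimal sketch of that documented invariant (file names and option literals are illustrative; the actual call, shown in comments, needs a live cluster):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DownloadFilesSketch {
    // Documented invariant: readOffsets and readLengths must either both
    // be empty, or both match fileNames in length.
    public static boolean argumentsValid(List<String> fileNames,
                                         List<Long> readOffsets,
                                         List<Long> readLengths) {
        if (readOffsets.isEmpty() && readLengths.isEmpty()) return true;
        return readOffsets.size() == fileNames.size()
            && readLengths.size() == fileNames.size();
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList("/data/report.csv", "/data/log?.txt");
        // Empty offsets/lengths: download both files in their entirety.
        System.out.println(argumentsValid(files, new ArrayList<>(), new ArrayList<>()));
        Map<String, String> options = new HashMap<>();
        options.put("file_encoding", "base64"); // recommended with JSON serialization
        // With a live cluster (hypothetical URL):
        // new GPUdb("http://localhost:9191")
        //     .downloadFiles(files, new ArrayList<>(), new ArrayList<>(), options);
    }
}
```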
-
dropBackup
public DropBackupResponse dropBackup(DropBackupRequest request) throws GPUdbException
Deletes one or more existing database backups and contained snapshots, accessible via the data sink specified by datasinkName. - Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
dropBackup
public DropBackupResponse dropBackup(String backupName, String datasinkName, Map<String,String> options) throws GPUdbException
Deletes one or more existing database backups and contained snapshots, accessible via the data sink specified by datasinkName. - Parameters:
backupName - Name of the backup to be deleted. An empty string or '*' will delete all existing backups. Any text followed by a '*' will delete backups whose names start with that text. When deleting multiple backups, DELETE_ALL_BACKUPS must be set to TRUE. datasinkName - Data sink through which the backup is accessible. options - Optional parameters. DRY_RUN: Whether or not to perform a dry run of a backup deletion. Supported values: TRUE, FALSE. The default value is FALSE. DELETE_ALL_BACKUPS: Allow multiple backups to be deleted if TRUE and multiple backup names are found matching backupName. Supported values: TRUE, FALSE. The default value is FALSE. NO_ERROR_IF_NOT_EXISTS: Whether or not to suppress the error if the specified backup does not exist. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
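A sketch of a wildcard deletion using the options above. The backup and sink names are hypothetical, and the key/value literals are assumed lowercase forms of the documented DRY_RUN and DELETE_ALL_BACKUPS constants:

```java
import java.util.HashMap;
import java.util.Map;

public class DropBackupSketch {
    // Options for deleting every backup whose name starts with "nightly_".
    public static Map<String, String> buildOptions() {
        Map<String, String> o = new HashMap<>();
        o.put("dry_run", "true");            // report what would be deleted; delete nothing
        o.put("delete_all_backups", "true"); // required when the name matches multiple backups
        return o;
    }

    public static void main(String[] args) {
        System.out.println(buildOptions());
        // Hypothetical names; requires a live cluster:
        // gpudb.dropBackup("nightly_*", "s3_sink", buildOptions());
    }
}
```

Running once with dry_run set, inspecting the response, and then repeating without it is a reasonable way to avoid deleting the wrong backups.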
-
dropCatalog
public DropCatalogResponse dropCatalog(DropCatalogRequest request) throws GPUdbException
Drops an existing catalog. Any external tables that depend on the catalog must be dropped before it can be dropped.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
dropCatalog
public DropCatalogResponse dropCatalog(String name, Map<String,String> options) throws GPUdbException
Drops an existing catalog. Any external tables that depend on the catalog must be dropped before it can be dropped.- Parameters:
name- Name of the catalog to be dropped. Must be an existing catalog.options- Optional parameters. The default value is an emptyMap.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
dropContainerRegistry
public DropContainerRegistryResponse dropContainerRegistry(DropContainerRegistryRequest request) throws GPUdbException
- Throws:
GPUdbException
-
dropContainerRegistry
public DropContainerRegistryResponse dropContainerRegistry(String registryName, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
dropCredential
public DropCredentialResponse dropCredential(DropCredentialRequest request) throws GPUdbException
Drop an existing credential.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
dropCredential
public DropCredentialResponse dropCredential(String credentialName, Map<String,String> options) throws GPUdbException
Drop an existing credential.- Parameters:
credentialName- Name of the credential to be dropped. Must be an existing credential.options- Optional parameters. The default value is an emptyMap.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
dropDatasink
public DropDatasinkResponse dropDatasink(DropDatasinkRequest request) throws GPUdbException
Drops an existing data sink. By default, if any table monitors use this sink as a destination, the request will be blocked unless option
CLEAR_TABLE_MONITORS is TRUE. - Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
dropDatasink
public DropDatasinkResponse dropDatasink(String name, Map<String,String> options) throws GPUdbException
Drops an existing data sink. By default, if any table monitors use this sink as a destination, the request will be blocked unless option
CLEAR_TABLE_MONITORS is TRUE. - Parameters:
name - Name of the data sink to be dropped. Must be an existing data sink. options - Optional parameters. CLEAR_TABLE_MONITORS: If TRUE, any table monitors that use this data sink will be cleared. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
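A minimal sketch of forcing the drop past dependent table monitors; the sink name is hypothetical and the option literal is an assumed lowercase form of CLEAR_TABLE_MONITORS:

```java
import java.util.HashMap;
import java.util.Map;

public class DropDatasinkSketch {
    // Without this option the drop is blocked while table monitors still
    // target the sink.
    public static Map<String, String> buildOptions() {
        Map<String, String> o = new HashMap<>();
        o.put("clear_table_monitors", "true"); // clear dependent monitors first
        return o;
    }

    public static void main(String[] args) {
        System.out.println(buildOptions());
        // Hypothetical sink name; requires a live cluster:
        // gpudb.dropDatasink("kafka_sink", buildOptions());
    }
}
```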
-
dropDatasource
public DropDatasourceResponse dropDatasource(DropDatasourceRequest request) throws GPUdbException
Drops an existing data source. Any external tables that depend on the data source must be dropped before it can be dropped.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
dropDatasource
public DropDatasourceResponse dropDatasource(String name, Map<String,String> options) throws GPUdbException
Drops an existing data source. Any external tables that depend on the data source must be dropped before it can be dropped.- Parameters:
name- Name of the data source to be dropped. Must be an existing data source.options- Optional parameters. The default value is an emptyMap.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
dropEnvironment
public DropEnvironmentResponse dropEnvironment(DropEnvironmentRequest request) throws GPUdbException
Drop an existing user-defined function (UDF) environment.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
dropEnvironment
public DropEnvironmentResponse dropEnvironment(String environmentName, Map<String,String> options) throws GPUdbException
Drop an existing user-defined function (UDF) environment.- Parameters:
environmentName - Name of the environment to be dropped. Must be an existing environment. options - Optional parameters. NO_ERROR_IF_NOT_EXISTS: If TRUE and the environment specified in environmentName does not exist, no error is returned. If FALSE and the environment does not exist, an error is returned. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
dropModel
public DropModelResponse dropModel(DropModelRequest request) throws GPUdbException
- Throws:
GPUdbException
-
dropModel
public DropModelResponse dropModel(String modelName, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
dropSchema
public DropSchemaResponse dropSchema(DropSchemaRequest request) throws GPUdbException
Drops an existing SQL-style schema, specified in schemaName. - Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
dropSchema
public DropSchemaResponse dropSchema(String schemaName, Map<String,String> options) throws GPUdbException
Drops an existing SQL-style schema, specified in schemaName. - Parameters:
schemaName - Name of the schema to be dropped. Must be an existing schema. options - Optional parameters. NO_ERROR_IF_NOT_EXISTS: If TRUE and the schema specified in schemaName does not exist, no error is returned. If FALSE and the schema does not exist, an error is returned. Supported values: TRUE, FALSE. The default value is FALSE. CASCADE: If TRUE, all tables within the schema will be dropped. If FALSE, the schema will be dropped only if empty. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
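A sketch combining the two options above for an idempotent, recursive schema drop; the schema name is hypothetical and the literals are assumed lowercase forms of CASCADE and NO_ERROR_IF_NOT_EXISTS:

```java
import java.util.HashMap;
import java.util.Map;

public class DropSchemaSketch {
    // Drop a schema along with all of its tables, without failing if the
    // schema is already gone.
    public static Map<String, String> buildOptions() {
        Map<String, String> o = new HashMap<>();
        o.put("cascade", "true");                // drop contained tables too
        o.put("no_error_if_not_exists", "true"); // succeed if the schema is absent
        return o;
    }

    public static void main(String[] args) {
        System.out.println(buildOptions());
        // Hypothetical schema name; requires a live cluster:
        // gpudb.dropSchema("staging", buildOptions());
    }
}
```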
-
evaluateModel
public EvaluateModelResponse evaluateModel(EvaluateModelRequest request) throws GPUdbException
- Throws:
GPUdbException
-
evaluateModel
public EvaluateModelResponse evaluateModel(String modelName, int replicas, String deploymentMode, String sourceTable, String destinationTable, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
executeProc
public ExecuteProcResponse executeProc(ExecuteProcRequest request) throws GPUdbException
Executes a proc. This endpoint is asynchronous and does not wait for the proc to complete before returning. If the proc being executed is distributed,
inputTableNames and inputColumnNames may be passed to the proc to use for reading data, and outputTableNames may be passed to the proc to use for writing data. If the proc being executed is non-distributed, these table parameters will be ignored.
- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
executeProc
public ExecuteProcResponse executeProc(String procName, Map<String,String> params, Map<String,ByteBuffer> binParams, List<String> inputTableNames, Map<String,List<String>> inputColumnNames, List<String> outputTableNames, Map<String,String> options) throws GPUdbException
Executes a proc. This endpoint is asynchronous and does not wait for the proc to complete before returning. If the proc being executed is distributed,
inputTableNames and inputColumnNames may be passed to the proc to use for reading data, and outputTableNames may be passed to the proc to use for writing data. If the proc being executed is non-distributed, these table parameters will be ignored.
- Parameters:
procName- Name of the proc to execute. Must be the name of a currently existing proc.params- A map containing named parameters to pass to the proc. Each key/value pair specifies the name of a parameter and its value. The default value is an emptyMap.binParams- A map containing named binary parameters to pass to the proc. Each key/value pair specifies the name of a parameter and its value. The default value is an emptyMap.inputTableNames- Names of the tables containing data to be passed to the proc. Each name specified must be the name of a currently existing table, in [schema_name.]table_name format, using standard name resolution rules. If no table names are specified, no data will be passed to the proc. This parameter is ignored if the proc has a non-distributed execution mode. The default value is an emptyList.inputColumnNames- Map of table names frominputTableNamesto lists of names of columns from those tables that will be passed to the proc. Each column name specified must be the name of an existing column in the corresponding table. If a table name frominputTableNamesis not included, all columns from that table will be passed to the proc. This parameter is ignored if the proc has a non-distributed execution mode. The default value is an emptyMap.outputTableNames- Names of the tables to which output data from the proc will be written, each in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. If a specified table does not exist, it will automatically be created with the same schema as the corresponding table (by order) frominputTableNames, excluding any primary and shard keys. If a specified table is a non-persistent result table, it must not have primary or shard keys. If no table names are specified, no output data can be returned from the proc. This parameter is ignored if the proc has a non-distributed execution mode. 
The default value is an emptyList.options- Optional parameters.CACHE_INPUT: No longer supported; option will be ignored. The default value is ''.USE_CACHED_INPUT: No longer supported; option will be ignored. The default value is ''.RUN_TAG: A string that, if not empty, can be used in subsequent calls toshowProcStatusorkillProcto identify the proc instance. The default value is ''.MAX_OUTPUT_LINES: The maximum number of lines of output from stdout and stderr to return viashowProcStatus. If the number of lines output exceeds the maximum, earlier lines are discarded. The default value is '100'.EXECUTE_AT_STARTUP: IfTRUE, an instance of the proc will run when the database is started instead of running immediately. TherunIdcan be retrieved usingshowProcand used inshowProcStatus. Supported values: The default value isFALSE.EXECUTE_AT_STARTUP_AS: Sets the alternate user name to execute this proc instance as whenEXECUTE_AT_STARTUPisTRUE. The default value is ''.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
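A sketch of assembling the full argument set for the distributed overload above. All proc, table, and column names are hypothetical; the option literals are assumed lowercase forms of RUN_TAG and MAX_OUTPUT_LINES:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ExecuteProcSketch {
    // Options for tracking an asynchronous proc run.
    public static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("run_tag", "demo-run");     // identify this run in showProcStatus/killProc
        options.put("max_output_lines", "200"); // keep up to 200 lines of stdout/stderr
        return options;
    }

    public static void main(String[] args) {
        // Scalar and binary parameters are passed to the proc by name.
        Map<String, String> params = Collections.singletonMap("iterations", "10");
        Map<String, ByteBuffer> binParams =
            Collections.singletonMap("seed", ByteBuffer.wrap(new byte[] {1, 2, 3}));
        // Distributed procs read from input tables and write to output tables.
        List<String> inputTables = Arrays.asList("ki_home.source");
        Map<String, List<String>> inputColumns =
            Collections.singletonMap("ki_home.source", Arrays.asList("x", "y"));
        List<String> outputTables = Arrays.asList("ki_home.result");
        // Hypothetical proc name; requires a live cluster:
        // gpudb.executeProc("my_proc", params, binParams, inputTables,
        //                   inputColumns, outputTables, buildOptions());
        System.out.println(buildOptions());
    }
}
```

Since the endpoint returns before the proc finishes, the run_tag (or the returned run ID) is what lets a later showProcStatus call find this run.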
-
executeSqlRaw
public RawExecuteSqlResponse executeSqlRaw(ExecuteSqlRequest request) throws GPUdbException
Execute a SQL statement (query, DML, or DDL). See SQL Support for the complete set of supported SQL commands.
When a caller wants all the results from a large query (e.g., more than max_get_records_size records), they can make multiple calls to this endpoint using the
offset and limit parameters to page through the results. Normally, this will execute the statement query each time. To avoid re-executing the query each time and to keep the results in the same order, the caller should specify a PAGING_TABLE name to hold the results of the query between calls and specify the PAGING_TABLE on subsequent calls. When this is done, the caller should clear the paging table and any other tables in the RESULT_TABLE_LIST (both returned in the response) when they are done paging through the results. pagingTable (and RESULT_TABLE_LIST) will be empty if no paging table was created (e.g., when all the query results were returned in the first call). - Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
executeSql
public ExecuteSqlResponse executeSql(ExecuteSqlRequest request) throws GPUdbException
Execute a SQL statement (query, DML, or DDL). See SQL Support for the complete set of supported SQL commands.
When a caller wants all the results from a large query (e.g., more than max_get_records_size records), they can make multiple calls to this endpoint using the
offset and limit parameters to page through the results. Normally, this will execute the statement query each time. To avoid re-executing the query each time and to keep the results in the same order, the caller should specify a PAGING_TABLE name to hold the results of the query between calls and specify the PAGING_TABLE on subsequent calls. When this is done, the caller should clear the paging table and any other tables in the RESULT_TABLE_LIST (both returned in the response) when they are done paging through the results. pagingTable (and RESULT_TABLE_LIST) will be empty if no paging table was created (e.g., when all the query results were returned in the first call). - Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
executeSql
public ExecuteSqlResponse executeSql(String statement, long offset, long limit, String requestSchemaStr, List<ByteBuffer> data, Map<String,String> options) throws GPUdbException
Execute a SQL statement (query, DML, or DDL). See SQL Support for the complete set of supported SQL commands.
When a caller wants all the results from a large query (e.g., more than max_get_records_size records), they can make multiple calls to this endpoint using the
offset and limit parameters to page through the results. Normally, this will execute the statement query each time. To avoid re-executing the query each time and to keep the results in the same order, the caller should specify a PAGING_TABLE name to hold the results of the query between calls and specify the PAGING_TABLE on subsequent calls. When this is done, the caller should clear the paging table and any other tables in the RESULT_TABLE_LIST (both returned in the response) when they are done paging through the results. pagingTable (and RESULT_TABLE_LIST) will be empty if no paging table was created (e.g., when all the query results were returned in the first call). - Parameters:
statement- SQL statement (query, DML, or DDL) to be executedoffset- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.limit- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. UsehasMoreRecordsto see if more records exist in the result to be fetched, andoffsetandlimitto request subsequent pages of results. The default value is -9999.requestSchemaStr- Avro schema ofdata. The default value is ''.data- An array of binary-encoded data for the records to be binded to the SQL query. Or useQUERY_PARAMETERSto pass the data in JSON format. The default value is an emptyList.options- Optional parameters.COST_BASED_OPTIMIZATION: IfFALSE, disables the cost-based optimization of the given query. Supported values: The default value isFALSE.DISTRIBUTED_JOINS: IfTRUE, enables the use of distributed joins in servicing the given query. Any query requiring a distributed join will succeed, though hints can be used in the query to change the distribution of the source data to allow the query to succeed. Supported values: The default value isFALSE.DISTRIBUTED_OPERATIONS: IfTRUE, enables the use of distributed operations in servicing the given query. Any query requiring a distributed join will succeed, though hints can be used in the query to change the distribution of the source data to allow the query to succeed. 
Supported values: The default value isFALSE.IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into or updating a table with a primary key, only used when primary key record collisions are rejected (UPDATE_ON_EXISTING_PKisFALSE). If set toTRUE, any record insert/update that is rejected for resulting in a primary key collision with an existing table record will be ignored with no error generated. IfFALSE, the rejection of any insert/update for resulting in a primary key collision will cause an error to be reported. If the specified table does not have a primary key or ifUPDATE_ON_EXISTING_PKisTRUE, then this option has no effect. Supported values:TRUE: Ignore inserts/updates that result in primary key collisions with existing recordsFALSE: Treat as errors any inserts/updates that result in primary key collisions with existing records
FALSE.LATE_MATERIALIZATION: IfTRUE, Joins/Filters results will always be materialized ( saved to result tables format). Supported values: The default value isFALSE.PAGING_TABLE: When specified (orPAGING_TABLE_TTLis set), the system will create a paging table to hold the results of the query, when the output has more records than are in the response (i.e., whenhasMoreRecordsisTRUE). If the specified paging table exists, the records from the paging table are returned without re-evaluating the query. It is the caller's responsibility to clear thepagingTableand other tables in theRESULT_TABLE_LIST(both returned in the response) when they are done with this query.PAGING_TABLE_TTL: Sets the TTL of the paging table. -1 indicates no timeout. Setting this option will cause a paging table to be generated when needed. ThepagingTableand other tables in theRESULT_TABLE_LIST(both returned in the response) will be automatically cleared after the TTL expires, if set to a positive number. However, it is still recommended that the caller clear these tables when they are done with this query.PARALLEL_EXECUTION: IfFALSE, disables the parallel step execution of the given query. Supported values: The default value isTRUE.PLAN_CACHE: IfFALSE, disables plan caching for the given query. Supported values: The default value isTRUE.PREPARE_MODE: IfTRUE, compiles a query into an execution plan and saves it in query cache. Query execution is not performed and an empty response will be returned to user. Supported values: The default value isFALSE.PRESERVE_DICT_ENCODING: IfTRUE, then columns that were dict encoded in the source table will be dict encoded in the projection table. Supported values: The default value isTRUE.QUERY_PARAMETERS: Query parameters in JSON array or arrays (for inserting multiple rows). This can be used instead ofdataandrequestSchemaStr.RESULTS_CACHING: IfFALSE, disables caching of the results of the given query. 
Supported values: The default value isTRUE.RULE_BASED_OPTIMIZATION: IfFALSE, disables rule-based rewrite optimizations for the given query. Supported values: The default value isTRUE.SSQ_OPTIMIZATION: IfFALSE, scalar subqueries will be translated into joins. Supported values: The default value isTRUE.TTL: Sets the TTL of the intermediate result tables used in query execution.UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into or updating a table with a primary key. If set toTRUE, any existing table record with primary key values that match those of a record being inserted or updated will be replaced by that record. If set toFALSE, any such primary key collision will result in the insert/update being rejected and the error handled as determined byIGNORE_EXISTING_PK. If the specified table does not have a primary key, then this option has no effect. Supported values:TRUE: Replace the collided-into record with the record inserted or updated when a new/modified record causes a primary key collision with an existing recordFALSE: Reject the insert or update when it results in a primary key collision with an existing record
FALSE.VALIDATE_CHANGE_COLUMN: When changing a column using alter table, validate the change before applying it. IfTRUE, then validate all values. A value too large (or too long) for the new type will prevent any change. IfFALSE, then when a value is too large or long, it will be truncated. Supported values: The default value isTRUE.CURRENT_SCHEMA: Use the supplied value as the default schema when processing this SQL command.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
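The paging flow described above can be sketched as follows. The table names are hypothetical, the PAGING_TABLE option key is assumed to be its lowercase form, and the loop over a live cluster is shown in comments only:

```java
import java.util.HashMap;
import java.util.Map;

public class ExecuteSqlPagingSketch {
    // Ask the server to keep results in a paging table between calls, so
    // the statement is evaluated only once and page order is stable.
    public static Map<String, String> pagingOptions(String pagingTable) {
        Map<String, String> o = new HashMap<>();
        o.put("paging_table", pagingTable);
        return o;
    }

    public static void main(String[] args) {
        long pageSize = 10000;
        Map<String, String> options = pagingOptions("ki_home.my_query_page");
        // Hypothetical loop; requires a live cluster:
        // long offset = 0;
        // ExecuteSqlResponse resp;
        // do {
        //     resp = gpudb.executeSql("SELECT * FROM ki_home.big_table",
        //                             offset, pageSize, "",
        //                             java.util.Collections.emptyList(), options);
        //     offset += pageSize;
        // } while (resp.getHasMoreRecords());
        // Afterwards, clear the paging table and any RESULT_TABLE_LIST tables.
        System.out.println(options + " pageSize=" + pageSize);
    }
}
```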
-
exportQueryMetrics
public ExportQueryMetricsResponse exportQueryMetrics(ExportQueryMetricsRequest request) throws GPUdbException
Export query metrics to a given destination. Returns query metrics.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
exportQueryMetrics
public ExportQueryMetricsResponse exportQueryMetrics(Map<String,String> options) throws GPUdbException
Export query metrics to a given destination. Returns query metrics.- Parameters:
options - Optional parameters. EXPRESSION: Filter for multi-query export. FILEPATH: Path to export target, specified as a filename or existing directory. FORMAT: Specifies the format in which to export the metrics. Supported values: JSON: Generic JSON output. JSON_TRACE_EVENT: Chromium/Perfetto trace event format. The default value is JSON. JOB_ID: Export query metrics for the currently running job. LIMIT: Record limit per file for multi-query export.
The default value is an empty Map. - Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
exportRecordsToFiles
public ExportRecordsToFilesResponse exportRecordsToFiles(ExportRecordsToFilesRequest request) throws GPUdbException
Export records from a table to files. All tables can be exported, in full or in part (see COLUMNS_TO_EXPORT and COLUMNS_TO_SKIP). Additional filtering can be applied when using export table with expression through SQL. The default destination is KiFS, though other storage types (Azure, S3, GCS, and HDFS) are supported through DATASINK_NAME; see createDatasink. The server's local file system is not supported. The default file format is delimited text. See options for different file types and different options for each file type. The table is saved to a single file if within max file size limits (which may vary depending on datasink type); if not, the table is split into multiple files, and these may be smaller than the max size limit.
All filenames created are returned in the response.
- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
exportRecordsToFiles
public ExportRecordsToFilesResponse exportRecordsToFiles(String tableName, String filepath, Map<String,String> options) throws GPUdbException
Export records from a table to files. All tables can be exported, in full or in part (see COLUMNS_TO_EXPORT and COLUMNS_TO_SKIP). Additional filtering can be applied when using export table with expression through SQL. The default destination is KiFS, though other storage types (Azure, S3, GCS, and HDFS) are supported through DATASINK_NAME; see createDatasink. The server's local file system is not supported. The default file format is delimited text. See options for different file types and different options for each file type. The table is saved to a single file if within max file size limits (which may vary depending on datasink type); if not, the table is split into multiple files, and these may be smaller than the max size limit.
All filenames created are returned in the response.
- Parameters:
tableName - Name of the table from which records will be exported. filepath - Path to data export target. If filepath has a file extension, it is read as the name of a file. If filepath is a directory, then the source table name with a random UUID appended will be used as the name of each exported file, all written to that directory. If filepath is a filename, then all exported files will have a random UUID appended to the given name. In either case, the target directory specified or implied must exist. The names of all exported files are returned in the response. options - Optional parameters. BATCH_SIZE: Number of records to be exported as a batch. The default value is '1000000'. COLUMN_FORMATS: For each source column specified, applies the column-property-bound format. Currently supported column properties include date, time, and datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., '{ "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }'. See DEFAULT_COLUMN_FORMATS for valid format syntax. COLUMNS_TO_EXPORT: Specifies a comma-delimited list of columns from the source table to export, written to the output file in the order they are given. Column names can be provided, in which case the target file will use those names as the column headers as well. Alternatively, column numbers can be specified, discretely or as a range. For example, a value of '5,7,1..3' will write values from the fifth column in the source table into the first column in the target file, from the seventh column in the source table into the second column in the target file, and from the first through third columns in the source table into the third through fifth columns in the target file. Mutually exclusive with COLUMNS_TO_SKIP. COLUMNS_TO_SKIP: Comma-separated list of column names or column numbers to not export.
All columns in the source table not specified will be written to the target file in the order they appear in the table definition. Mutually exclusive withCOLUMNS_TO_EXPORT.DATASINK_NAME: Datasink name, created usingcreateDatasink.DEFAULT_COLUMN_FORMATS: Specifies the default format to use to write data. Currently supported column properties include date, time, and datetime. This default column-property-bound format can be overridden by specifying a column property and format for a given source column inCOLUMN_FORMATS. For each specified annotation, the format will apply to all columns with that annotation unless customCOLUMN_FORMATSfor that annotation are specified. The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., '{ "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }'. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', 'S', and 's', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds). Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation meet both the 'date' and 'time' control character requirements. For example, '{"datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to write text as "05/04/2000 12:12:11"EXPORT_DDL: Save DDL to a separate file. The default value is 'false'.FILE_EXTENSION: Extension to give the export file. The default value is '.csv'.FILE_TYPE: Specifies the file format to use when exporting data. Supported values:DELIMITED_TEXT: Delimited text file format; e.g., CSV, TSV, PSV, etc.PARQUET
DELIMITED_TEXT.KINETICA_HEADER: Whether to include a Kinetica proprietary header. Will not be written ifTEXT_HAS_HEADERisFALSE. Supported values: The default value isFALSE.KINETICA_HEADER_DELIMITER: If a Kinetica proprietary header is included, then specify a property separator. Different from column delimiter. The default value is '|'.COMPRESSION_TYPE: File compression type. GZip can be applied to text and Parquet files. Snappy can only be applied to Parquet files, and is the default compression for them. Supported values:SINGLE_FILE: Save records to a single file. This option may be ignored if file size exceeds internal file size limits (this limit will differ on different targets). Supported values: The default value isTRUE.SINGLE_FILE_MAX_SIZE: Max file size (in MB) to allow saving to a single file. May be overridden by target limitations. The default value is ''.TEXT_DELIMITER: Specifies the character to write out to delimit field values and field names in the header (if present). ForDELIMITED_TEXTFILE_TYPEonly. The default value is ','.TEXT_HAS_HEADER: Indicates whether to write out a header row. ForDELIMITED_TEXTFILE_TYPEonly. Supported values: The default value isTRUE.TEXT_NULL_STRING: Specifies the character string that should be written out for the null value in the data. ForDELIMITED_TEXTFILE_TYPEonly. The default value is '\N'.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
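The options above are passed as a plain string map. A minimal sketch of assembling one follows; the option values are illustrative, and the commented-out call assumes this parameter list belongs to the export-to-files method (exportRecordsToFiles) with a live connection available:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative export options; keys mirror the option names documented above.
Map<String, String> options = new HashMap<>();
options.put("batch_size", "1000000");        // records exported per batch
options.put("file_type", "delimited_text");  // or "parquet"
options.put("file_extension", ".psv");       // pair with a matching delimiter
options.put("text_delimiter", "|");          // write pipe-separated values
options.put("text_has_header", "true");      // include a header row

// With a live connection (URL, table, and directory names hypothetical):
// GPUdb gpudb = new GPUdb("http://localhost:9191");
// gpudb.exportRecordsToFiles("example.my_table", "/tmp/export/", options);
```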
-
exportRecordsToTable
public ExportRecordsToTableResponse exportRecordsToTable(ExportRecordsToTableRequest request) throws GPUdbException
Exports records from the source table to the specified target table in an external database.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
exportRecordsToTable
public ExportRecordsToTableResponse exportRecordsToTable(String tableName, String remoteQuery, Map<String,String> options) throws GPUdbException
Exports records from the source table to the specified target table in an external database.
- Parameters:
tableName - Name of the table from which the data will be exported to the remote database, in [schema_name.]table_name format, using standard name resolution rules.
remoteQuery - Parameterized insert query to export gpudb table data into the remote database. The default value is ''.
options - Optional parameters.
BATCH_SIZE: Batch size, which determines how many rows to export per round trip. The default value is '200000'.
DATASINK_NAME: Name of an existing external data sink to which the table specified in tableName will be exported.
JDBC_SESSION_INIT_STATEMENT: Executes the statement for each JDBC session before doing the actual load. The default value is ''.
JDBC_CONNECTION_INIT_STATEMENT: Executes the statement once before doing the actual load. The default value is ''.
REMOTE_TABLE: Name of the target table to which the source table is exported. When this option is specified, remote_query cannot be specified. The default value is ''.
USE_ST_GEOMFROM_CASTS: Wraps parameterized variables with st_geomfromtext or st_geomfromwkb based on the source column type. Supported values: TRUE, FALSE. The default value is FALSE.
USE_INDEXED_PARAMETERS: Uses $n style syntax when generating the insert query for the remote_table option. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
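As a sketch, the same options pattern applies here; the sink and table names below are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative options for exporting a table through an existing data sink.
Map<String, String> options = new HashMap<>();
options.put("batch_size", "200000");                        // rows per round trip
options.put("datasink_name", "example_sink");               // assumed pre-created via createDatasink
options.put("remote_table", "remote_schema.target_table");  // cannot be combined with a remote query

// With a live connection (names hypothetical):
// gpudb.exportRecordsToTable("example.my_table", "", options);
```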
-
filter
public FilterResponse filter(FilterRequest request) throws GPUdbException
Filters data based on the specified expression. The results are stored in a result set with the given viewName. For details, see Expressions.
The response message contains the number of points for which the expression evaluated to be true, which is equivalent to the size of the result view.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filter
public FilterResponse filter(String tableName, String viewName, String expression, Map<String,String> options) throws GPUdbException
Filters data based on the specified expression. The results are stored in a result set with the given viewName. For details, see Expressions.
The response message contains the number of points for which the expression evaluated to be true, which is equivalent to the size of the result view.
- Parameters:
tableName - Name of the table to filter, in [schema_name.]table_name format, using standard name resolution rules. This may be the name of a table or a view (when chaining queries).
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
expression - The select expression to filter the specified table. For details, see Expressions.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
VIEW_ID: The view this filtered view is part of. The default value is ''.
TTL: Sets the TTL of the view specified in viewName.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
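A minimal sketch of a filter call, assuming hypothetical table and column names; the CREATE_TEMP_TABLE option lets the server pick the view name:

```java
import java.util.HashMap;
import java.util.Map;

// An illustrative filter expression over hypothetical columns x and y.
String expression = "(x > 10) and (y < 50)";

Map<String, String> options = new HashMap<>();
options.put("create_temp_table", "true"); // server generates a sys_temp view name

// With a live connection; the response reports how many records matched:
// FilterResponse resp = gpudb.filter("example.my_table", "", expression, options);
```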
-
filterByArea
public FilterByAreaResponse filterByArea(FilterByAreaRequest request) throws GPUdbException
Calculates which objects from a table are within a named area of interest (NAI/polygon). The operation is synchronous, meaning that a response will not be returned until all the matching objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input NAI restriction specification is created with the name viewName passed in as part of the input.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByArea
public FilterByAreaResponse filterByArea(String tableName, String viewName, String xColumnName, List<Double> xVector, String yColumnName, List<Double> yVector, Map<String,String> options) throws GPUdbException
Calculates which objects from a table are within a named area of interest (NAI/polygon). The operation is synchronous, meaning that a response will not be returned until all the matching objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input NAI restriction specification is created with the name viewName passed in as part of the input.
- Parameters:
tableName - Name of the table to filter, in [schema_name.]table_name format, using standard name resolution rules. This may be the name of a table or a view (when chaining queries).
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
xColumnName - Name of the column containing the x values to be filtered.
xVector - List of x coordinates of the vertices of the polygon representing the area to be filtered.
yColumnName - Name of the column containing the y values to be filtered.
yVector - List of y coordinates of the vertices of the polygon representing the area to be filtered.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema provided is non-existent, it will be automatically created.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
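The polygon is supplied as parallel x and y vertex lists; a sketch with a hypothetical triangle:

```java
import java.util.Arrays;
import java.util.List;

// Vertices of a hypothetical triangular area of interest; the i-th entries of
// xVector and yVector together form one vertex, so both lists must have the
// same length.
List<Double> xVector = Arrays.asList(0.0, 10.0, 5.0);
List<Double> yVector = Arrays.asList(0.0, 0.0, 8.0);

// With a live connection (table, view, and column names hypothetical):
// gpudb.filterByArea("example.points", "example.points_in_area",
//                    "x", xVector, "y", yVector, new HashMap<>());
```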
-
filterByAreaGeometry
public FilterByAreaGeometryResponse filterByAreaGeometry(FilterByAreaGeometryRequest request) throws GPUdbException
Calculates which geospatial geometry objects from a table intersect a named area of interest (NAI/polygon). The operation is synchronous, meaning that a response will not be returned until all the matching objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input NAI restriction specification is created with the name viewName passed in as part of the input.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByAreaGeometry
public FilterByAreaGeometryResponse filterByAreaGeometry(String tableName, String viewName, String columnName, List<Double> xVector, List<Double> yVector, Map<String,String> options) throws GPUdbException
Calculates which geospatial geometry objects from a table intersect a named area of interest (NAI/polygon). The operation is synchronous, meaning that a response will not be returned until all the matching objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input NAI restriction specification is created with the name viewName passed in as part of the input.
- Parameters:
tableName - Name of the table to filter, in [schema_name.]table_name format, using standard name resolution rules. This may be the name of a table or a view (when chaining queries).
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
columnName - Name of the geospatial geometry column to be filtered.
xVector - List of x coordinates of the vertices of the polygon representing the area to be filtered.
yVector - List of y coordinates of the vertices of the polygon representing the area to be filtered.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] The schema for the newly created view. If the schema is non-existent, it will be automatically created.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByBox
public FilterByBoxResponse filterByBox(FilterByBoxRequest request) throws GPUdbException
Calculates how many objects within the given table lie in a rectangular box. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set which satisfies the input NAI restriction specification is also created when a viewName is passed in as part of the input payload.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByBox
public FilterByBoxResponse filterByBox(String tableName, String viewName, String xColumnName, double minX, double maxX, String yColumnName, double minY, double maxY, Map<String,String> options) throws GPUdbException
Calculates how many objects within the given table lie in a rectangular box. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set which satisfies the input NAI restriction specification is also created when a viewName is passed in as part of the input payload.
- Parameters:
tableName - Name of the table on which the bounding box operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
xColumnName - Name of the column on which to perform the bounding box query. Must be a valid numeric column.
minX - Lower bound for the column chosen by xColumnName. Must be less than or equal to maxX.
maxX - Upper bound for xColumnName. Must be greater than or equal to minX.
yColumnName - Name of a column on which to perform the bounding box query. Must be a valid numeric column.
minY - Lower bound for yColumnName. Must be less than or equal to maxY.
maxY - Upper bound for yColumnName. Must be greater than or equal to minY.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
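A sketch of the box bounds with hypothetical values; each minimum must not exceed its corresponding maximum:

```java
// Hypothetical rectangular bounds for the x and y columns.
double minX = -10.0, maxX = 10.0;
double minY = -5.0, maxY = 5.0;

// With a live connection (table, view, and column names hypothetical):
// gpudb.filterByBox("example.points", "example.points_in_box",
//                   "x", minX, maxX, "y", minY, maxY, new HashMap<>());
```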
-
filterByBoxGeometry
public FilterByBoxGeometryResponse filterByBoxGeometry(FilterByBoxGeometryRequest request) throws GPUdbException
Calculates which geospatial geometry objects from a table intersect a rectangular box. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set which satisfies the input NAI restriction specification is also created when a viewName is passed in as part of the input payload.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByBoxGeometry
public FilterByBoxGeometryResponse filterByBoxGeometry(String tableName, String viewName, String columnName, double minX, double maxX, double minY, double maxY, Map<String,String> options) throws GPUdbException
Calculates which geospatial geometry objects from a table intersect a rectangular box. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set which satisfies the input NAI restriction specification is also created when a viewName is passed in as part of the input payload.
- Parameters:
tableName - Name of the table on which the bounding box operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
columnName - Name of the geospatial geometry column to be filtered.
minX - Lower bound for the x-coordinate of the rectangular box. Must be less than or equal to maxX.
maxX - Upper bound for the x-coordinate of the rectangular box. Must be greater than or equal to minX.
minY - Lower bound for the y-coordinate of the rectangular box. Must be less than or equal to maxY.
maxY - Upper bound for the y-coordinate of the rectangular box. Must be greater than or equal to minY.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema provided is non-existent, it will be automatically created.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByGeometry
public FilterByGeometryResponse filterByGeometry(FilterByGeometryRequest request) throws GPUdbException
Applies a geometry filter against a geospatial geometry column in a given table or view. The filtering geometry is provided by inputWkt.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByGeometry
public FilterByGeometryResponse filterByGeometry(String tableName, String viewName, String columnName, String inputWkt, String operation, Map<String,String> options) throws GPUdbException
Applies a geometry filter against a geospatial geometry column in a given table or view. The filtering geometry is provided by inputWkt.
- Parameters:
tableName - Name of the table on which the filter by geometry will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table or view containing a geospatial geometry column.
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
columnName - Name of the column to be used in the filter. Must be a geospatial geometry column.
inputWkt - A geometry in WKT format that will be used to filter the objects in tableName. The default value is ''.
operation - The geometric filtering operation to perform. Supported values:
CONTAINS: Matches records that contain the given WKT in inputWkt, i.e. the given WKT is within the bounds of a record's geometry.
CROSSES: Matches records that cross the given WKT.
DISJOINT: Matches records that are disjoint from the given WKT.
EQUALS: Matches records that are the same as the given WKT.
INTERSECTS: Matches records that intersect the given WKT.
OVERLAPS: Matches records that overlap the given WKT.
TOUCHES: Matches records that touch the given WKT.
WITHIN: Matches records that are within the given WKT.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema provided is non-existent, it will be automatically created.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
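A sketch of the WKT input and operation choice; the table and column names are hypothetical:

```java
// A hypothetical filtering polygon in WKT, and one of the operations listed
// above ("within" keeps records whose geometry lies inside the polygon).
String inputWkt = "POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))";
String operation = "within";

// With a live connection:
// gpudb.filterByGeometry("example.shapes", "example.shapes_inside",
//                        "geom", inputWkt, operation, new HashMap<>());
```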
-
filterByList
public FilterByListResponse filterByList(FilterByListRequest request) throws GPUdbException
Calculates which records from a table have values in the given list for the corresponding column. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input filter specification is also created if a viewName is passed in as part of the request. For example, if a type definition has the columns 'x' and 'y', then a filter by list query with the column map {"x":["10.1", "2.3"], "y":["0.0", "-31.5", "42.0"]} will return the count of all data points whose x and y values match both in the respective x- and y-lists, e.g., "x = 10.1 and y = 0.0", "x = 2.3 and y = -31.5", etc. However, a record with "x = 10.1 and y = -31.5" or "x = 2.3 and y = 0.0" would not be returned because the values in the given lists do not correspond.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByList
public FilterByListResponse filterByList(String tableName, String viewName, Map<String,List<String>> columnValuesMap, Map<String,String> options) throws GPUdbException
Calculates which records from a table have values in the given list for the corresponding column. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input filter specification is also created if a viewName is passed in as part of the request. For example, if a type definition has the columns 'x' and 'y', then a filter by list query with the column map {"x":["10.1", "2.3"], "y":["0.0", "-31.5", "42.0"]} will return the count of all data points whose x and y values match both in the respective x- and y-lists, e.g., "x = 10.1 and y = 0.0", "x = 2.3 and y = -31.5", etc. However, a record with "x = 10.1 and y = -31.5" or "x = 2.3 and y = 0.0" would not be returned because the values in the given lists do not correspond.
- Parameters:
tableName - Name of the table to filter, in [schema_name.]table_name format, using standard name resolution rules. This may be the name of a table or a view (when chaining queries).
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
columnValuesMap - List of values for the corresponding column in the table.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema provided is non-existent, it will be automatically created.
FILTER_MODE: String indicating the filter mode, either 'in_list' or 'not_in_list'. Supported values:
IN_LIST: The filter will match all items that are in the provided list(s).
NOT_IN_LIST: The filter will match all items that are not in the provided list(s).
The default value is IN_LIST.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
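A sketch of the column/value map from the example above, plus the FILTER_MODE option; the table and view names are hypothetical:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The column/value map from the documentation's example: a record matches
// only when its values for the listed columns correspond across the lists.
Map<String, List<String>> columnValuesMap = new HashMap<>();
columnValuesMap.put("x", Arrays.asList("10.1", "2.3"));
columnValuesMap.put("y", Arrays.asList("0.0", "-31.5", "42.0"));

Map<String, String> options = new HashMap<>();
options.put("filter_mode", "in_list"); // or "not_in_list" to invert the match

// With a live connection:
// gpudb.filterByList("example.points", "example.points_matching",
//                    columnValuesMap, options);
```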
-
filterByRadius
public FilterByRadiusResponse filterByRadius(FilterByRadiusRequest request) throws GPUdbException
Calculates which objects from a table lie within a circle with the given radius and center point (i.e. circular NAI). The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input circular NAI restriction specification is also created if a viewName is passed in as part of the request. For track data, all track points that lie within the circle plus one point on either side of the circle (if the track goes beyond the circle) will be included in the result.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByRadius
public FilterByRadiusResponse filterByRadius(String tableName, String viewName, String xColumnName, double xCenter, String yColumnName, double yCenter, double radius, Map<String,String> options) throws GPUdbException
Calculates which objects from a table lie within a circle with the given radius and center point (i.e. circular NAI). The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input circular NAI restriction specification is also created if a viewName is passed in as part of the request. For track data, all track points that lie within the circle plus one point on either side of the circle (if the track goes beyond the circle) will be included in the result.
- Parameters:
tableName - Name of the table on which the filter by radius operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
xColumnName - Name of the column to be used for the x-coordinate (the longitude) of the center.
xCenter - Value of the longitude of the center. Must be within [-180.0, 180.0]. The minimum allowed value is -180. The maximum allowed value is 180.
yColumnName - Name of the column to be used for the y-coordinate (the latitude) of the center.
yCenter - Value of the latitude of the center. Must be within [-90.0, 90.0]. The minimum allowed value is -90. The maximum allowed value is 90.
radius - The radius of the circle within which the search will be performed. Must be a non-zero positive value. It is in meters; so, for example, a value of '42000' means 42 km. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema which is to contain the newly created view. If the schema is non-existent, it will be automatically created.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
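A sketch of the circle parameters with hypothetical center coordinates; note the radius is in meters:

```java
// Hypothetical circle: center in degrees, radius in meters (42 km here).
double xCenter = -71.06;  // longitude, within [-180.0, 180.0]
double yCenter = 42.36;   // latitude, within [-90.0, 90.0]
double radius = 42000.0;  // meters

// With a live connection (table, view, and column names hypothetical):
// gpudb.filterByRadius("example.points", "example.points_nearby",
//                      "x", xCenter, "y", yCenter, radius, new HashMap<>());
```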
-
filterByRadiusGeometry
public FilterByRadiusGeometryResponse filterByRadiusGeometry(FilterByRadiusGeometryRequest request) throws GPUdbException
Calculates which geospatial geometry objects from a table intersect a circle with the given radius and center point (i.e. circular NAI). The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input circular NAI restriction specification is also created if a viewName is passed in as part of the request.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByRadiusGeometry
public FilterByRadiusGeometryResponse filterByRadiusGeometry(String tableName, String viewName, String columnName, double xCenter, double yCenter, double radius, Map<String,String> options) throws GPUdbException
Calculates which geospatial geometry objects from a table intersect a circle with the given radius and center point (i.e. circular NAI). The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input circular NAI restriction specification is also created if a viewName is passed in as part of the request.
- Parameters:
tableName - Name of the table on which the filter by radius operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
columnName - Name of the geospatial geometry column to be filtered.
xCenter - Value of the longitude of the center. Must be within [-180.0, 180.0]. The minimum allowed value is -180. The maximum allowed value is 180.
yCenter - Value of the latitude of the center. Must be within [-90.0, 90.0]. The minimum allowed value is -90. The maximum allowed value is 90.
radius - The radius of the circle within which the search will be performed. Must be a non-zero positive value. It is in meters; so, for example, a value of '42000' means 42 km. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema provided is non-existent, it will be automatically created.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByRange
public FilterByRangeResponse filterByRange(FilterByRangeRequest request) throws GPUdbException
Calculates which objects from a table have a column that is within the given bounds. An object from the table identified by tableName is added to the view viewName if its column is within [lowerBound, upperBound] (inclusive). The operation is synchronous. The response provides a count of the number of objects which passed the bound filter. Although this functionality can also be accomplished with the standard filter function, it is more efficient. For track objects, the count reflects how many points fall within the given bounds (which may not include all the track points of any given track).
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByRange
public FilterByRangeResponse filterByRange(String tableName, String viewName, String columnName, double lowerBound, double upperBound, Map<String,String> options) throws GPUdbException
Calculates which objects from a table have a column that is within the given bounds. An object from the table identified by tableName is added to the view viewName if its column is within [lowerBound, upperBound] (inclusive). The operation is synchronous. The response provides a count of the number of objects which passed the bound filter. Although this functionality can also be accomplished with the standard filter function, it is more efficient.
For track objects, the count reflects how many points fall within the given bounds (which may not include all the track points of any given track).
- Parameters:
tableName - Name of the table on which the filter by range operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
columnName - Name of a column on which the operation would be applied.
lowerBound - Value of the lower bound (inclusive).
upperBound - Value of the upper bound (inclusive).
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values are TRUE and FALSE; the default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
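As a sketch only: a minimal call to the overload above, assuming a running Kinetica server at http://localhost:9191, the com.gpudb client on the classpath, and a hypothetical example.sensor_readings table with a numeric temperature column. The lowercase option key mirrors the CREATE_TEMP_TABLE constant described above.

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.FilterByRangeResponse;

public class FilterByRangeExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Let the server generate a temporary view name instead of naming one
        Map<String, String> options = new HashMap<>();
        options.put("create_temp_table", "true");

        // Keep records whose temperature lies in [20.0, 30.0], inclusive
        FilterByRangeResponse response = gpudb.filterByRange(
                "example.sensor_readings",  // tableName (hypothetical)
                "",                         // viewName: empty; server picks one
                "temperature",              // columnName
                20.0,                       // lowerBound (inclusive)
                30.0,                       // upperBound (inclusive)
                options);

        System.out.println("Records in range: " + response.getCount());
    }
}
```

Because create_temp_table is set, the generated view name can be read back from the response's info map under the QUALIFIED_VIEW_NAME key described above.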
-
filterBySeries
public FilterBySeriesResponse filterBySeries(FilterBySeriesRequest request) throws GPUdbException
Filters objects matching all points of the given track (works only on track type data). It allows users to specify a particular track to find all other points in the table that fall within specified ranges (spatial and temporal) of all points of the given track. Additionally, the user can specify another track to see if the two intersect (or come close to each other within the specified ranges). The user also has the flexibility of using different metrics for the spatial distance calculation: Euclidean (flat geometry) or Great Circle (spherical geometry to approximate the Earth's surface distances). The filtered points are stored in a newly created result set. The return value of the function is the number of points in the resultant set (view).
This operation is synchronous, meaning that a response will not be returned until all the objects are fully available.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterBySeries
public FilterBySeriesResponse filterBySeries(String tableName, String viewName, String trackId, List<String> targetTrackIds, Map<String,String> options) throws GPUdbException
Filters objects matching all points of the given track (works only on track type data). It allows users to specify a particular track to find all other points in the table that fall within specified ranges (spatial and temporal) of all points of the given track. Additionally, the user can specify another track to see if the two intersect (or come close to each other within the specified ranges). The user also has the flexibility of using different metrics for the spatial distance calculation: Euclidean (flat geometry) or Great Circle (spherical geometry to approximate the Earth's surface distances). The filtered points are stored in a newly created result set. The return value of the function is the number of points in the resultant set (view).
This operation is synchronous, meaning that a response will not be returned until all the objects are fully available.
- Parameters:
tableName - Name of the table on which the filter by track operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be a currently existing table with a track present.
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
trackId - The ID of the track which will act as the filtering points. Must be an existing track within the given table.
targetTrackIds - Up to one track ID to intersect with the "filter" track. If any is provided, it must be a valid track ID within the given set.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values are TRUE and FALSE; the default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
SPATIAL_RADIUS: A positive number passed as a string representing the radius of the search area centered around each track point's geospatial coordinates. The value is interpreted in meters. Required parameter. The minimum allowed value is '0'.
TIME_RADIUS: A positive number passed as a string representing the maximum allowable time difference between the timestamps of a filtered object and the given track's points. The value is interpreted in seconds. Required parameter. The minimum allowed value is '0'.
SPATIAL_DISTANCE_METRIC: A string representing the coordinate system to use for the spatial search criteria. Acceptable values are 'euclidean' and 'great_circle'. Optional parameter; the default is 'euclidean'.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
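A minimal sketch of the overload above, assuming a running server and a hypothetical example.vehicle_tracks track table containing a track with ID "track_42". Note that SPATIAL_RADIUS and TIME_RADIUS are required options, passed as strings (lowercase keys mirror the constants above).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.FilterBySeriesResponse;

public class FilterBySeriesExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Both radii are required and passed as string-valued options
        Map<String, String> options = new HashMap<>();
        options.put("spatial_radius", "500");  // meters around each track point
        options.put("time_radius", "60");      // seconds around each timestamp
        options.put("spatial_distance_metric", "great_circle");

        // Empty list: filter against all points rather than one target track
        List<String> targetTrackIds = new ArrayList<>();

        FilterBySeriesResponse response = gpudb.filterBySeries(
                "example.vehicle_tracks", "example.nearby_points",
                "track_42", targetTrackIds, options);

        System.out.println("Points near track_42: " + response.getCount());
    }
}
```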
-
filterByString
public FilterByStringResponse filterByString(FilterByStringRequest request) throws GPUdbException
Calculates which objects from a table or view match a string expression for the given string columns. SettingCASE_SENSITIVEcan modify case sensitivity in matching for all modes exceptSEARCH. ForSEARCHmode details and limitations, see Full Text Search.- Parameters:
request-Requestobject containing the parameters for the operation.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
filterByString
public FilterByStringResponse filterByString(String tableName, String viewName, String expression, String mode, List<String> columnNames, Map<String,String> options) throws GPUdbException
Calculates which objects from a table or view match a string expression for the given string columns. Setting CASE_SENSITIVE can modify case sensitivity in matching for all modes except SEARCH. For SEARCH mode details and limitations, see Full Text Search.
- Parameters:
tableName - Name of the table on which the filter operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table or view.
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
expression - The expression with which to filter the table.
mode - The string filtering mode to apply. See below for details. Supported values:
SEARCH: Full text search query with wildcards and boolean operators. Note that for this mode, no column can be specified in columnNames; all string columns of the table that have text search enabled will be searched.
EQUALS: Exact whole-string match (accelerated).
CONTAINS: Partial substring match (not accelerated). If the column is a string type (non-charN) and the number of records is too large, it will return 0.
STARTS_WITH: Strings that start with the given expression (not accelerated). If the column is a string type (non-charN) and the number of records is too large, it will return 0.
REGEX: Full regular expression search (not accelerated). If the column is a string type (non-charN) and the number of records is too large, it will return 0.
columnNames - List of columns on which to apply the filter. Ignored for SEARCH mode.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values are TRUE and FALSE; the default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
CASE_SENSITIVE: If FALSE, then string filtering will ignore case. Does not apply to SEARCH mode. Supported values are TRUE and FALSE; the default value is TRUE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
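A minimal sketch of a case-insensitive substring filter using the overload above, assuming a running server and a hypothetical example.customers table with a string city column:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.FilterByStringResponse;

public class FilterByStringExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Ignore case when matching (CASE_SENSITIVE option, lowercase key)
        Map<String, String> options = new HashMap<>();
        options.put("case_sensitive", "false");

        FilterByStringResponse response = gpudb.filterByString(
                "example.customers",              // tableName (hypothetical)
                "example.springfield_customers",  // result view
                "springfield",                    // expression to match
                "contains",                       // mode: partial substring match
                Arrays.asList("city"),            // columns to search
                options);

        System.out.println("Matching customers: " + response.getCount());
    }
}
```

For SEARCH mode, the columnNames list would instead be left empty, since all text-search-enabled string columns are searched.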
-
filterByTable
public FilterByTableResponse filterByTable(FilterByTableRequest request) throws GPUdbException
Filters objects in one table based on objects in another table. The user must specify matching column types from the two tables (i.e. the target table from which objects will be filtered and the source table based on which the filter will be created); the column names need not be the same. If a viewName is specified, the filtered objects will be put in a newly created view. The operation is synchronous, meaning that a response will not be returned until all objects are fully available in the result view. The return value contains the count (i.e. the size) of the resulting view.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByTable
public FilterByTableResponse filterByTable(String tableName, String viewName, String columnName, String sourceTableName, String sourceTableColumnName, Map<String,String> options) throws GPUdbException
Filters objects in one table based on objects in another table. The user must specify matching column types from the two tables (i.e. the target table from which objects will be filtered and the source table based on which the filter will be created); the column names need not be the same. If a viewName is specified, the filtered objects will be put in a newly created view. The operation is synchronous, meaning that a response will not be returned until all objects are fully available in the result view. The return value contains the count (i.e. the size) of the resulting view.
- Parameters:
tableName - Name of the table whose data will be filtered, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
columnName - Name of the column by whose value the data will be filtered from the table designated by tableName.
sourceTableName - Name of the table whose data will be compared against that in the table called tableName, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
sourceTableColumnName - Name of the column in sourceTableName whose values will be used as the filter for table tableName. Must be a geospatial geometry column if in 'spatial' mode; otherwise, must match the type of columnName.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values are TRUE and FALSE; the default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
FILTER_MODE: String indicating the filter mode, either IN_TABLE or NOT_IN_TABLE. The default value is IN_TABLE.
MODE: Mode - should be either SPATIAL or NORMAL. The default value is NORMAL.
BUFFER: Buffer size, in meters. Only relevant for SPATIAL mode. The default value is '0'.
BUFFER_METHOD: Method used to buffer polygons. Only relevant for SPATIAL mode. The default value is NORMAL.
MAX_PARTITION_SIZE: Maximum number of points in a partition. Only relevant for SPATIAL mode. The default value is '0'.
MAX_PARTITION_SCORE: Maximum number of points * edges in a partition. Only relevant for SPATIAL mode. The default value is '8000000'.
X_COLUMN_NAME: Name of the column containing the x value of the point being filtered in SPATIAL mode. The default value is 'x'.
Y_COLUMN_NAME: Name of the column containing the y value of the point being filtered in SPATIAL mode. The default value is 'y'.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
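A minimal sketch of the overload above in NOT_IN_TABLE mode, assuming a running server and hypothetical example.orders and example.blocked_customers tables sharing a customer_id column type:

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.FilterByTableResponse;

public class FilterByTableExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Keep only orders whose customer_id does NOT appear in the source table
        Map<String, String> options = new HashMap<>();
        options.put("filter_mode", "not_in_table");

        FilterByTableResponse response = gpudb.filterByTable(
                "example.orders",             // table to filter
                "example.allowed_orders",     // result view
                "customer_id",                // column in the target table
                "example.blocked_customers",  // source table driving the filter
                "customer_id",                // matching column in the source table
                options);

        System.out.println("Orders kept: " + response.getCount());
    }
}
```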
-
filterByValue
public FilterByValueResponse filterByValue(FilterByValueRequest request) throws GPUdbException
Calculates which objects from a table have a particular value for a particular column. The input parameters provide a way to specify either a String or a Double valued column and a desired value for the column on which the filter is performed. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new result view which satisfies the input filter restriction specification is also created with a view name passed in as part of the input payload. Although this functionality can also be accomplished with the standard filter function, it is more efficient.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
filterByValue
public FilterByValueResponse filterByValue(String tableName, String viewName, boolean isString, double value, String valueStr, String columnName, Map<String,String> options) throws GPUdbException
Calculates which objects from a table have a particular value for a particular column. The input parameters provide a way to specify either a String or a Double valued column and a desired value for the column on which the filter is performed. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new result view which satisfies the input filter restriction specification is also created with a view name passed in as part of the input payload. Although this functionality can also be accomplished with the standard filter function, it is more efficient.
- Parameters:
tableName - Name of an existing table on which to perform the calculation, in [schema_name.]table_name format, using standard name resolution rules.
viewName - If provided, then this will be the name of the view containing the results, in [schema_name.]view_name format, using standard name resolution rules and meeting table naming criteria. Must not be an already existing table or view. The default value is ''.
isString - Indicates whether the value being searched for is a string or numeric.
value - The value to search for. The default value is 0.
valueStr - The string value to search for. The default value is ''.
columnName - Name of a column on which the filter by value would be applied.
options - Optional parameters.
CREATE_TEMP_TABLE: If TRUE, a unique temporary table name will be generated in the sys_temp schema and used in place of viewName. This is always allowed even if the caller does not have permission to create tables. The generated name is returned in QUALIFIED_VIEW_NAME. Supported values are TRUE and FALSE; the default value is FALSE.
COLLECTION_NAME: [DEPRECATED--please specify the containing schema for the view as part of viewName and use createSchema to create the schema if non-existent] Name of a schema for the newly created view. If the schema is non-existent, it will be automatically created.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
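A minimal sketch of a string-valued exact match using the overload above, assuming a running server and a hypothetical example.customers table. With isString set to true, the numeric value argument is ignored and valueStr carries the search value:

```java
import java.util.HashMap;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.FilterByValueResponse;

public class FilterByValueExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        FilterByValueResponse response = gpudb.filterByValue(
                "example.customers",        // tableName (hypothetical)
                "example.acme_customers",   // result view
                true,                       // isString: search a string column
                0,                          // value (unused for string filters)
                "ACME Corp",                // valueStr: the value to match
                "company_name",             // columnName
                new HashMap<String, String>());

        System.out.println("Matching rows: " + response.getCount());
    }
}
```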
-
getJob
public GetJobResponse getJob(GetJobRequest request) throws GPUdbException
Gets the status and result of an asynchronously running job. See createJob for starting an asynchronous job. Some fields of the response are filled only after the submitted job has finished execution.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
getJob
public GetJobResponse getJob(long jobId, Map<String,String> options) throws GPUdbException
Gets the status and result of an asynchronously running job. See createJob for starting an asynchronous job. Some fields of the response are filled only after the submitted job has finished execution.
- Parameters:
jobId - A unique identifier for the job whose status and result is to be fetched.
options - Optional parameters.
JOB_TAG: Job tag returned in the call that created the job.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
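A minimal polling sketch for the overload above, assuming a running server and a hypothetical job ID previously returned by createJob; the getJobStatus, getRunning, and getSuccessful accessors reflect the response fields that are filled in as the job progresses:

```java
import java.util.HashMap;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.GetJobResponse;

public class GetJobExample {
    public static void main(String[] args)
            throws GPUdbException, InterruptedException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        long jobId = 12345L;  // hypothetical ID returned earlier by createJob

        // Poll once per second until the job stops running
        GetJobResponse response;
        do {
            Thread.sleep(1000);
            response = gpudb.getJob(jobId, new HashMap<String, String>());
            System.out.println("Job status: " + response.getJobStatus());
        } while (response.getRunning());

        // Result fields are only populated after execution finishes
        System.out.println("Successful: " + response.getSuccessful());
    }
}
```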
-
getRecordsRaw
public RawGetRecordsResponse getRecordsRaw(GetRecordsRequest request) throws GPUdbException
Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column. This operation can be performed on tables and views. Records can be returned encoded as binary, json, or geojson.
This operation supports paging through the data via the offset and limit parameters. Note that when paging through a table, if the table (or the underlying table in case of a view) is updated (records are inserted, deleted or modified) the records retrieved may differ between calls based on the updates applied.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
getRecords
public <TResponse> GetRecordsResponse<TResponse> getRecords(Object typeDescriptor, GetRecordsRequest request) throws GPUdbException
Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column. This operation can be performed on tables and views. Records can be returned encoded as binary, json, or geojson.
This operation supports paging through the data via the offset and limit parameters. Note that when paging through a table, if the table (or the underlying table in case of a view) is updated (records are inserted, deleted or modified) the records retrieved may differ between calls based on the updates applied.
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
typeDescriptor - Type descriptor used for decoding returned objects.
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
IllegalArgumentException - if typeDescriptor is not an instance of one of the following: Type, TypeObjectMap, Schema, or a Class that implements IndexedRecord
GPUdbException - if an error occurs during the operation.
-
getRecords
public <TResponse> GetRecordsResponse<TResponse> getRecords(Object typeDescriptor, String tableName, long offset, long limit, Map<String,String> options) throws GPUdbException
Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column. This operation can be performed on tables and views. Records can be returned encoded as binary, json, or geojson.
This operation supports paging through the data via the offset and limit parameters. Note that when paging through a table, if the table (or the underlying table in case of a view) is updated (records are inserted, deleted or modified) the records retrieved may differ between calls based on the updates applied.
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
typeDescriptor - Type descriptor used for decoding returned objects.
tableName - Name of the table or view from which the records will be fetched, in [schema_name.]table_name format, using standard name resolution rules.
offset - A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limit - A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use hasMoreRecords to see if more records exist in the result to be fetched, and offset and limit to request subsequent pages of results. The default value is -9999.
options - Optional parameters.
EXPRESSION: Optional filter expression to apply to the table.
FAST_INDEX_LOOKUP: Indicates if indexes should be used to perform the lookup for a given expression if possible. Only applicable if there is no sorting, the expression contains only equivalence comparisons based on existing table indexes, and the range of requested values is from [0 to END_OF_SET]. Supported values are TRUE and FALSE; the default value is TRUE.
SORT_BY: Optional column that the data should be sorted by. Empty by default (i.e. no sorting is applied).
SORT_ORDER: String indicating how the returned values should be sorted - ascending or descending. If sort_order is provided, sort_by has to be provided. Supported values are ASCENDING and DESCENDING; the default value is ASCENDING.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
IllegalArgumentException - if typeDescriptor is not an instance of one of the following: Type, TypeObjectMap, Schema, or a Class that implements IndexedRecord
GPUdbException - if an error occurs during the operation.
-
getRecords
public <TResponse> GetRecordsResponse<TResponse> getRecords(GetRecordsRequest request) throws GPUdbException
Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column. This operation can be performed on tables and views. Records can be returned encoded as binary, json, or geojson.
This operation supports paging through the data via the offset and limit parameters. Note that when paging through a table, if the table (or the underlying table in case of a view) is updated (records are inserted, deleted or modified) the records retrieved may differ between calls based on the updates applied.
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
getRecords
public <TResponse> GetRecordsResponse<TResponse> getRecords(String tableName, long offset, long limit, Map<String,String> options) throws GPUdbException
Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column. This operation can be performed on tables and views. Records can be returned encoded as binary, json, or geojson.
This operation supports paging through the data via the offset and limit parameters. Note that when paging through a table, if the table (or the underlying table in case of a view) is updated (records are inserted, deleted or modified) the records retrieved may differ between calls based on the updates applied.
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
tableName - Name of the table or view from which the records will be fetched, in [schema_name.]table_name format, using standard name resolution rules.
offset - A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limit - A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use hasMoreRecords to see if more records exist in the result to be fetched, and offset and limit to request subsequent pages of results. The default value is -9999.
options - Optional parameters.
EXPRESSION: Optional filter expression to apply to the table.
FAST_INDEX_LOOKUP: Indicates if indexes should be used to perform the lookup for a given expression if possible. Only applicable if there is no sorting, the expression contains only equivalence comparisons based on existing table indexes, and the range of requested values is from [0 to END_OF_SET]. Supported values are TRUE and FALSE; the default value is TRUE.
SORT_BY: Optional column that the data should be sorted by. Empty by default (i.e. no sorting is applied).
SORT_ORDER: String indicating how the returned values should be sorted - ascending or descending. If sort_order is provided, sort_by has to be provided. Supported values are ASCENDING and DESCENDING; the default value is ASCENDING.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
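A paging sketch for the overload above, assuming a running server and a hypothetical example.sensor_readings table whose type is already known to the client; records are bound to the generic com.gpudb.Record interface for illustration:

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.Record;
import com.gpudb.protocol.GetRecordsResponse;

public class GetRecordsExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Filter and sort server-side via options (lowercase keys mirror
        // the EXPRESSION / SORT_BY / SORT_ORDER constants above)
        Map<String, String> options = new HashMap<>();
        options.put("expression", "temperature > 25");
        options.put("sort_by", "timestamp");
        options.put("sort_order", "descending");

        long offset = 0;
        final long pageSize = 100;

        // Page through the result 100 records at a time, using
        // hasMoreRecords to know when to stop
        GetRecordsResponse<Record> response;
        do {
            response = gpudb.getRecords(
                    "example.sensor_readings", offset, pageSize, options);
            for (Record record : response.getData()) {
                System.out.println(record.get("temperature"));
            }
            offset += pageSize;
        } while (response.getHasMoreRecords());
    }
}
```

As the description notes, rows inserted or deleted between page fetches can shift page boundaries, so contiguity across pages is not guaranteed.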
-
getRecordsByColumnRaw
public RawGetRecordsByColumnResponse getRecordsByColumnRaw(GetRecordsByColumnRequest request) throws GPUdbException
For a given table, retrieves the values from the requested column(s). Maps of column name to the array of values as well as the column data type are returned. This endpoint supports pagination with the offset and limit parameters.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as createProjection.
When using pagination, if the table (or the underlying table in the case of a view) is modified (records are inserted, updated, or deleted) during a call to the endpoint, the records or values retrieved may differ between calls based on the type of the update, e.g., the contiguity across pages cannot be relied upon.
If tableName is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
getRecordsByColumn
public GetRecordsByColumnResponse getRecordsByColumn(GetRecordsByColumnRequest request) throws GPUdbException
For a given table, retrieves the values from the requested column(s). Maps of column name to the array of values as well as the column data type are returned. This endpoint supports pagination with the offset and limit parameters.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as createProjection.
When using pagination, if the table (or the underlying table in the case of a view) is modified (records are inserted, updated, or deleted) during a call to the endpoint, the records or values retrieved may differ between calls based on the type of the update, e.g., the contiguity across pages cannot be relied upon.
If tableName is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
getRecordsByColumn
public GetRecordsByColumnResponse getRecordsByColumn(String tableName, List<String> columnNames, long offset, long limit, Map<String,String> options) throws GPUdbException
For a given table, retrieves the values from the requested column(s). Maps of column name to the array of values as well as the column data type are returned. This endpoint supports pagination with the offset and limit parameters.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as createProjection.
When using pagination, if the table (or the underlying table in the case of a view) is modified (records are inserted, updated, or deleted) during a call to the endpoint, the records or values retrieved may differ between calls based on the type of the update, e.g., the contiguity across pages cannot be relied upon.
If tableName is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
- Parameters:
tableName - Name of the table or view on which this operation will be performed, in [schema_name.]table_name format, using standard name resolution rules. An empty table name retrieves one record from a single-row virtual table, where columns specified should be constants or constant expressions.
columnNames - The list of column values to retrieve.
offset - A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limit - A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use hasMoreRecords to see if more records exist in the result to be fetched, and offset and limit to request subsequent pages of results. The default value is -9999.
options - Optional parameters.
EXPRESSION: Optional filter expression to apply to the table.
SORT_BY: Optional column that the data should be sorted by. Used in conjunction with SORT_ORDER. The ORDER_BY option can be used in lieu of SORT_BY / SORT_ORDER. The default value is ''.
SORT_ORDER: String indicating how the returned values should be sorted - ASCENDING or DESCENDING. If SORT_ORDER is provided, SORT_BY has to be provided. The default value is ASCENDING.
ORDER_BY: Comma-separated list of the columns to be sorted by as well as the sort direction, e.g., 'timestamp asc, x desc'. The default value is ''.
CONVERT_WKTS_TO_WKBS: If TRUE, then WKT string columns will be returned as WKB bytes. Supported values are TRUE and FALSE; the default value is FALSE.
ROUTE_TO_TOM: For multihead record retrieval without shard key expression - specifies from which tom to retrieve data.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
getRecordsBySeriesRaw
public RawGetRecordsBySeriesResponse getRecordsBySeriesRaw(GetRecordsBySeriesRequest request) throws GPUdbException
Retrieves the complete series/track records from the given worldTableName based on the partial track information contained in the tableName.
This operation supports paging through the data via the offset and limit parameters.
In contrast to getRecordsRaw, this returns records grouped by series/track. So if offset is 0 and limit is 5, this operation would return the first 5 series/tracks in tableName. Each series/track will be returned sorted by its TIMESTAMP column.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
getRecordsBySeries
public <TResponse> GetRecordsBySeriesResponse<TResponse> getRecordsBySeries(Object typeDescriptor, GetRecordsBySeriesRequest request) throws GPUdbException
Retrieves the complete series/track records from the given worldTableName based on the partial track information contained in the tableName.
This operation supports paging through the data via the offset and limit parameters.
In contrast to getRecords, this returns records grouped by series/track. So if offset is 0 and limit is 5, this operation would return the first 5 series/tracks in tableName. Each series/track will be returned sorted by its TIMESTAMP column.
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
typeDescriptor - Type descriptor used for decoding returned objects.
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
IllegalArgumentException - if typeDescriptor is not an instance of one of the following: Type, TypeObjectMap, Schema, or a Class that implements IndexedRecord
GPUdbException - if an error occurs during the operation.
-
getRecordsBySeries
public <TResponse> GetRecordsBySeriesResponse<TResponse> getRecordsBySeries(Object typeDescriptor, String tableName, String worldTableName, int offset, int limit, Map<String,String> options) throws GPUdbException
Retrieves the complete series/track records from the given worldTableName based on the partial track information contained in the tableName.
This operation supports paging through the data via the offset and limit parameters.
In contrast to getRecords, this returns records grouped by series/track. So if offset is 0 and limit is 5, this operation would return the first 5 series/tracks in tableName. Each series/track will be returned sorted by its TIMESTAMP column.
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
typeDescriptor - Type descriptor used for decoding returned objects.
tableName - Name of the table or view for which series/tracks will be fetched, in [schema_name.]table_name format, using standard name resolution rules.
worldTableName - Name of the table containing the complete series/track information to be returned for the tracks present in tableName, in [schema_name.]table_name format, using standard name resolution rules. Typically this is used when retrieving series/tracks from a view (which contains partial series/tracks) but the user wants to retrieve the entire original series/tracks. Can be blank.
offset - A positive integer indicating the number of initial series/tracks to skip (useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limit - A positive integer indicating the maximum number of series/tracks to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results should be returned. The default value is 250.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
IllegalArgumentException - if typeDescriptor is not an instance of one of the following: Type, TypeObjectMap, Schema, or a Class that implements IndexedRecord
GPUdbException - if an error occurs during the operation.
-
getRecordsBySeries
public <TResponse> GetRecordsBySeriesResponse<TResponse> getRecordsBySeries(GetRecordsBySeriesRequest request) throws GPUdbException
Retrieves the complete series/track records from the given worldTableName based on the partial track information contained in the tableName.
This operation supports paging through the data via the offset and limit parameters.
In contrast to getRecords, this returns records grouped by series/track. So if offset is 0 and limit is 5, this operation would return the first 5 series/tracks in tableName. Each series/track will be returned sorted by its TIMESTAMP column.
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
getRecordsBySeries
public <TResponse> GetRecordsBySeriesResponse<TResponse> getRecordsBySeries(String tableName, String worldTableName, int offset, int limit, Map<String,String> options) throws GPUdbException
Retrieves the complete series/track records from the given worldTableName based on the partial track information contained in the tableName.
This operation supports paging through the data via the offset and limit parameters.
In contrast to getRecords, this returns records grouped by series/track. So if offset is 0 and limit is 5, this operation would return the first 5 series/tracks in tableName. Each series/track will be returned sorted by its TIMESTAMP column.
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
tableName - Name of the table or view for which series/tracks will be fetched, in [schema_name.]table_name format, using standard name resolution rules.
worldTableName - Name of the table containing the complete series/track information to be returned for the tracks present in tableName, in [schema_name.]table_name format, using standard name resolution rules. Typically this is used when retrieving series/tracks from a view (which contains partial series/tracks) but the user wants to retrieve the entire original series/tracks. Can be blank.
offset - A positive integer indicating the number of initial series/tracks to skip (useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limit - A positive integer indicating the maximum number of series/tracks to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results should be returned. The default value is 250.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
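The offset/limit paging pattern described above can be sketched in plain Java. This is an illustrative stub only: the `fetchPage` function stands in for a real `getRecordsBySeries(tableName, worldTableName, offset, limit, options)` call, and the stop condition (a page shorter than `limit`) is the usual client-side convention, not an API guarantee.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Sketch of offset/limit paging over series/tracks. A stub data source is
// used in place of a live GPUdb call; names and behavior are illustrative.
public class SeriesPagingSketch {
    static List<String> fetchAllSeries(
            BiFunction<Integer, Integer, List<String>> fetchPage, int limit) {
        List<String> all = new ArrayList<>();
        int offset = 0;
        while (true) {
            // In real code: getRecordsBySeries(table, "", offset, limit, options)
            List<String> page = fetchPage.apply(offset, limit);
            all.addAll(page);
            if (page.size() < limit) break; // short page: no more series
            offset += limit;                // next page starts where this one ended
        }
        return all;
    }

    // Stub "server" holding 12 series/track IDs.
    static List<String> stubPage(int offset, int limit) {
        List<String> out = new ArrayList<>();
        for (int i = offset; i < Math.min(offset + limit, 12); i++) out.add("track-" + i);
        return out;
    }

    public static void main(String[] args) {
        List<String> all = fetchAllSeries(SeriesPagingSketch::stubPage, 5);
        System.out.println(all.size()); // prints 12 (fetched across 3 pages)
    }
}
```

With limit 5 over 12 series, the loop issues three requests (pages of 5, 5, and 2) and stops on the short page.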
-
getRecordsFromCollectionRaw
public RawGetRecordsFromCollectionResponse getRecordsFromCollectionRaw(GetRecordsFromCollectionRequest request) throws GPUdbException
Retrieves records from a collection. The operation can optionally return the record IDs, which can be used in certain queries such as deleteRecords.
This operation supports paging through the data via the offset and limit parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
getRecordsFromCollection
public <TResponse> GetRecordsFromCollectionResponse<TResponse> getRecordsFromCollection(Object typeDescriptor, GetRecordsFromCollectionRequest request) throws GPUdbException
Retrieves records from a collection. The operation can optionally return the record IDs, which can be used in certain queries such as deleteRecords.
This operation supports paging through the data via the offset and limit parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
typeDescriptor - Type descriptor used for decoding returned objects.
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
IllegalArgumentException - if typeDescriptor is not an instance of one of the following: Type, TypeObjectMap, Schema, or a Class that implements IndexedRecord
GPUdbException - if an error occurs during the operation.
-
getRecordsFromCollection
public <TResponse> GetRecordsFromCollectionResponse<TResponse> getRecordsFromCollection(Object typeDescriptor, String tableName, long offset, long limit, Map<String,String> options) throws GPUdbException
Retrieves records from a collection. The operation can optionally return the record IDs, which can be used in certain queries such as deleteRecords.
This operation supports paging through the data via the offset and limit parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
typeDescriptor - Type descriptor used for decoding returned objects.
tableName - Name of the collection or table from which records are to be retrieved, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing collection or table.
offset - A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limit - A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use offset and limit to request subsequent pages of results. The default value is -9999.
options -
RETURN_RECORD_IDS: If TRUE, then return the internal record ID along with each returned record. Supported values: TRUE, FALSE. The default value is FALSE.
EXPRESSION: Optional filter expression to apply to the table. The default value is ''.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
IllegalArgumentException - if typeDescriptor is not an instance of one of the following: Type, TypeObjectMap, Schema, or a Class that implements IndexedRecord
GPUdbException - if an error occurs during the operation.
-
getRecordsFromCollection
public <TResponse> GetRecordsFromCollectionResponse<TResponse> getRecordsFromCollection(GetRecordsFromCollectionRequest request) throws GPUdbException
Retrieves records from a collection. The operation can optionally return the record IDs, which can be used in certain queries such as deleteRecords.
This operation supports paging through the data via the offset and limit parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
getRecordsFromCollection
public <TResponse> GetRecordsFromCollectionResponse<TResponse> getRecordsFromCollection(String tableName, long offset, long limit, Map<String,String> options) throws GPUdbException
Retrieves records from a collection. The operation can optionally return the record IDs, which can be used in certain queries such as deleteRecords.
This operation supports paging through the data via the offset and limit parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)
- Type Parameters:
TResponse - The type of object being retrieved.
- Parameters:
tableName - Name of the collection or table from which records are to be retrieved, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing collection or table.
offset - A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limit - A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. Use offset and limit to request subsequent pages of results. The default value is -9999.
options -
RETURN_RECORD_IDS: If TRUE, then return the internal record ID along with each returned record. Supported values: TRUE, FALSE. The default value is FALSE.
EXPRESSION: Optional filter expression to apply to the table. The default value is ''.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
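The END_OF_SET (-9999) sentinel used by the limit parameters above can be modeled with a small helper. This is a client-side sketch only: the server-side cap (max_get_records_size) is normally not known to the client, so the value used here is purely illustrative.

```java
// Sketch of END_OF_SET handling for the `limit` parameter described above.
// END_OF_SET (-9999) asks for the maximum the server allows; the numeric cap
// passed in stands in for the server's max_get_records_size setting.
public class LimitSketch {
    public static final int END_OF_SET = -9999;

    // Effective number of records a single request can return, given a
    // hypothetical server cap. The server enforces the cap regardless of
    // what the client requests.
    static int effectiveLimit(int requestedLimit, int serverMax) {
        if (requestedLimit == END_OF_SET) return serverMax;
        return Math.min(requestedLimit, serverMax);
    }

    public static void main(String[] args) {
        System.out.println(effectiveLimit(END_OF_SET, 20000)); // prints 20000
        System.out.println(effectiveLimit(50000, 20000));      // capped: prints 20000
        System.out.println(effectiveLimit(100, 20000));        // prints 100
    }
}
```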
-
getVectortile
public GetVectortileResponse getVectortile(GetVectortileRequest request) throws GPUdbException
- Throws:
GPUdbException
-
getVectortile
public GetVectortileResponse getVectortile(List<String> tableNames, List<String> columnNames, Map<String,List<String>> layers, int tileX, int tileY, int zoom, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
grantPermission
public GrantPermissionResponse grantPermission(GrantPermissionRequest request) throws GPUdbException
Grants the specified permission on the specified object to a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermission
public GrantPermissionResponse grantPermission(String principal, String object, String objectType, String permission, Map<String,String> options) throws GPUdbException
Grants the specified permission on the specified object to a user or role.
- Parameters:
principal - Name of the user or role for which the permission is being granted. Must be an existing user or role. The default value is ''.
object - Name of the object on which the permission is being granted. It is recommended to use a fully-qualified name when possible.
objectType - The type of the object on which the permission is being granted. Supported values:
CATALOG: Catalog
CONTEXT: Context
CREDENTIAL: Credential
DATASINK: Data Sink
DATASOURCE: Data Source
DIRECTORY: KiFS File Directory
GRAPH: A Graph object
PROC: UDF Procedure
SCHEMA: Schema
SQL_PROC: SQL Procedure
SYSTEM: System-level access
TABLE: Database Table
TABLE_MONITOR: Table monitor
permission - Permission being granted. Supported values:
ADMIN: Full read/write and administrative access on the object.
CONNECT: Connect access on the given data source or data sink.
CREATE: Ability to create new objects of this type.
DELETE: Delete rows from tables.
EXECUTE: Ability to execute the Procedure object.
INSERT: Insert access to tables.
MONITOR: Monitor logs and statistics.
READ: Ability to read, list, and use the object.
SEND_ALERT: Ability to send system alerts.
UPDATE: Update access to the table.
USER_ADMIN: Access to administer users and roles that do not have system_admin permission.
WRITE: Access to write, change, and delete objects.
options - Optional parameters.
COLUMNS: Apply table security to these columns, comma-separated. The default value is ''.
FILTER_EXPRESSION: Optional filter expression to apply to this grant. Only rows that match the filter will be affected. The default value is ''.
WITH_GRANT_OPTION: Allow the recipient to grant the same permission (or a subset) to others. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
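Assembling the options map for a column-restricted, row-filtered grant can be sketched as follows. The lowercase literal keys ("columns", "filter_expression", "with_grant_option") are assumed string values of the documented option constants; in real code, prefer the constants on the generated GrantPermissionRequest.Options class. The commented call at the end is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of building a grantPermission options map per the entry above.
// Option keys are assumed lowercase literals; use the request-class
// constants in production code.
public class GrantOptionsSketch {
    static Map<String, String> columnLevelGrantOptions(String columns, String filter) {
        Map<String, String> options = new HashMap<>();
        options.put("columns", columns);           // COLUMNS: comma-separated column list
        options.put("filter_expression", filter);  // FILTER_EXPRESSION: row-level filter
        options.put("with_grant_option", "false"); // WITH_GRANT_OPTION: keep the default
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> opts =
            columnLevelGrantOptions("id,name", "region = 'us-east'");
        System.out.println(opts.size()); // prints 3
        // Hypothetical call (object type and permission values illustrative):
        // gpudb.grantPermission("analyst_role", "ki_home.orders", "table", "read", opts);
    }
}
```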
-
grantPermissionCredential
public GrantPermissionCredentialResponse grantPermissionCredential(GrantPermissionCredentialRequest request) throws GPUdbException
Grants a credential-level permission to a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermissionCredential
public GrantPermissionCredentialResponse grantPermissionCredential(String name, String permission, String credentialName, Map<String,String> options) throws GPUdbException
Grants a credential-level permission to a user or role.
- Parameters:
name - Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission - Permission to grant to the user or role. Supported values:
CREDENTIAL_ADMIN: Full read/write and administrative access on the credential.
CREDENTIAL_READ: Ability to read and use the credential.
credentialName - Name of the credential on which the permission will be granted. Must be an existing credential, or an empty string to grant access on all credentials.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermissionDatasource
public GrantPermissionDatasourceResponse grantPermissionDatasource(GrantPermissionDatasourceRequest request) throws GPUdbException
Grants a data source permission to a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermissionDatasource
public GrantPermissionDatasourceResponse grantPermissionDatasource(String name, String permission, String datasourceName, Map<String,String> options) throws GPUdbException
Grants a data source permission to a user or role.
- Parameters:
name - Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission - Permission to grant to the user or role. Supported values:
datasourceName - Name of the data source on which the permission will be granted. Must be an existing data source, or an empty string to grant permission on all data sources.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermissionDirectory
public GrantPermissionDirectoryResponse grantPermissionDirectory(GrantPermissionDirectoryRequest request) throws GPUdbException
Grants a KiFS directory-level permission to a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermissionDirectory
public GrantPermissionDirectoryResponse grantPermissionDirectory(String name, String permission, String directoryName, Map<String,String> options) throws GPUdbException
Grants a KiFS directory-level permission to a user or role.
- Parameters:
name - Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission - Permission to grant to the user or role. Supported values:
DIRECTORY_READ: For files in the directory, access to list files, download files, or use files in server-side functions
DIRECTORY_WRITE: Access to upload files to, or delete files from, the directory. A user or role with write access automatically has read access
directoryName - Name of the KiFS directory to which the permission grants access. An empty directory name grants access to all KiFS directories
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermissionProc
public GrantPermissionProcResponse grantPermissionProc(GrantPermissionProcRequest request) throws GPUdbException
Grants a proc-level permission to a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermissionProc
public GrantPermissionProcResponse grantPermissionProc(String name, String permission, String procName, Map<String,String> options) throws GPUdbException
Grants a proc-level permission to a user or role.
- Parameters:
name - Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission - Permission to grant to the user or role. Supported values:
PROC_ADMIN: Admin access to the proc.
PROC_EXECUTE: Execute access to the proc.
procName - Name of the proc to which the permission grants access. Must be an existing proc, or an empty string to grant access to all procs.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermissionSystem
public GrantPermissionSystemResponse grantPermissionSystem(GrantPermissionSystemRequest request) throws GPUdbException
Grants a system-level permission to a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermissionSystem
public GrantPermissionSystemResponse grantPermissionSystem(String name, String permission, Map<String,String> options) throws GPUdbException
Grants a system-level permission to a user or role.
- Parameters:
name - Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission - Permission to grant to the user or role. Supported values:
SYSTEM_ADMIN: Full access to all data and system functions.
SYSTEM_USER_ADMIN: Access to administer users and roles that do not have system_admin permission.
SYSTEM_WRITE: Read and write access to all tables.
SYSTEM_READ: Read-only access to all tables.
SYSTEM_SEND_ALERT: Send system alerts.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermissionTable
public GrantPermissionTableResponse grantPermissionTable(GrantPermissionTableRequest request) throws GPUdbException
Grants a table-level permission to a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantPermissionTable
public GrantPermissionTableResponse grantPermissionTable(String name, String permission, String tableName, String filterExpression, Map<String,String> options) throws GPUdbException
Grants a table-level permission to a user or role.
- Parameters:
name - Name of the user or role to which the permission will be granted. Must be an existing user or role.
permission - Permission to grant to the user or role. Supported values:
TABLE_ADMIN: Full read/write and administrative access to the table.
TABLE_INSERT: Insert access to the table.
TABLE_UPDATE: Update access to the table.
TABLE_DELETE: Delete access to the table.
TABLE_READ: Read access to the table.
tableName - Name of the table to which the permission grants access, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table, view, or schema. If a schema, the permission also applies to tables and views in the schema.
filterExpression - Optional filter expression to apply to this grant. Only rows that match the filter will be affected. The default value is ''.
options - Optional parameters.
COLUMNS: Apply security to these columns, comma-separated. The default value is ''.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantRole
public GrantRoleResponse grantRole(GrantRoleRequest request) throws GPUdbException
Grants membership in a role to a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
grantRole
public GrantRoleResponse grantRole(String role, String member, Map<String,String> options) throws GPUdbException
Grants membership in a role to a user or role.
- Parameters:
role - Name of the role in which membership will be granted. Must be an existing role.
member - Name of the user or role that will be granted membership in role. Must be an existing user or role.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
hasPermission
public HasPermissionResponse hasPermission(HasPermissionRequest request) throws GPUdbException
Checks if the specified user has the specified permission on the specified object.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
hasPermission
public HasPermissionResponse hasPermission(String principal, String object, String objectType, String permission, Map<String,String> options) throws GPUdbException
Checks if the specified user has the specified permission on the specified object.
- Parameters:
principal - Name of the user for which the permission is being checked. Must be an existing user. If blank, the current user is used. The default value is ''.
object - Name of the object to check for the requested permission. It is recommended to use a fully-qualified name when possible.
objectType - The type of object being checked. Supported values:
CATALOG: External Catalog
CONTEXT: Context
CREDENTIAL: Credential
DATASINK: Data Sink
DATASOURCE: Data Source
DIRECTORY: KiFS File Directory
GRAPH: A Graph object
PROC: UDF Procedure
SCHEMA: Schema
SQL_PROC: SQL Procedure
SYSTEM: System-level access
TABLE: Database Table
TABLE_MONITOR: Table monitor
permission - Permission to check for. Supported values:
ADMIN: Full read/write and administrative access on the object.
CONNECT: Connect access on the given data source or data sink.
CREATE: Ability to create new objects of this type.
DELETE: Delete rows from tables.
EXECUTE: Ability to execute the Procedure object.
INSERT: Insert access to tables.
MONITOR: Monitor logs and statistics.
READ: Ability to read, list, and use the object.
SEND_ALERT: Ability to send system alerts.
UPDATE: Update access to the table.
USER_ADMIN: Access to administer users and roles that do not have system_admin permission.
WRITE: Access to write, change, and delete objects.
options - Optional parameters.
NO_ERROR_IF_NOT_EXISTS: If FALSE, an error is returned if the provided object does not exist or is blank. If TRUE, FALSE is returned for hasPermission instead. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
hasProc
public HasProcResponse hasProc(HasProcRequest request) throws GPUdbException
Checks the existence of a proc with the given name.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
hasProc
public HasProcResponse hasProc(String procName, Map<String,String> options) throws GPUdbException
Checks the existence of a proc with the given name.
- Parameters:
procName - Name of the proc to check for existence.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
hasRole
public HasRoleResponse hasRole(HasRoleRequest request) throws GPUdbException
Checks if the specified user has the specified role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
hasRole
public HasRoleResponse hasRole(String principal, String role, Map<String,String> options) throws GPUdbException
Checks if the specified user has the specified role.
- Parameters:
principal - Name of the user for which role membership is being checked. Must be an existing user. If blank, the current user is used. The default value is ''.
role - Name of the role to check for membership.
options - Optional parameters.
NO_ERROR_IF_NOT_EXISTS: If FALSE, an error is returned if the provided role does not exist or is blank. If TRUE, FALSE is returned for hasRole instead. Supported values: TRUE, FALSE. The default value is FALSE.
ONLY_DIRECT: If FALSE, search recursively to determine whether the principal is a member of the role. If TRUE, the principal must be a direct member of the role. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
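The ONLY_DIRECT semantics above can be illustrated with a plain in-memory role graph. This is a conceptual sketch only; the real check is performed server-side, and the role names here are invented.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of direct vs. recursive role membership, mirroring the ONLY_DIRECT
// option of hasRole. Role data is an invented in-memory graph.
public class RoleCheckSketch {
    // member -> roles the member belongs to directly
    static final Map<String, Set<String>> DIRECT = new HashMap<>();
    static {
        DIRECT.put("alice", new HashSet<>(Arrays.asList("analysts")));
        DIRECT.put("analysts", new HashSet<>(Arrays.asList("readers")));
    }

    // ONLY_DIRECT = TRUE: principal must be a direct member of role.
    static boolean hasRoleDirect(String principal, String role) {
        return DIRECT.getOrDefault(principal, Collections.emptySet()).contains(role);
    }

    // ONLY_DIRECT = FALSE (the default): search membership recursively.
    static boolean hasRoleRecursive(String principal, String role) {
        if (hasRoleDirect(principal, role)) return true;
        for (String r : DIRECT.getOrDefault(principal, Collections.emptySet()))
            if (hasRoleRecursive(r, role)) return true;
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasRoleDirect("alice", "readers"));    // prints false (only indirect)
        System.out.println(hasRoleRecursive("alice", "readers")); // prints true (via analysts)
    }
}
```

Here "alice" is directly in "analysts", and "analysts" is in "readers", so the direct check fails while the recursive check succeeds.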
-
hasSchema
public HasSchemaResponse hasSchema(HasSchemaRequest request) throws GPUdbException
Checks for the existence of a schema with the given name.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
hasSchema
public HasSchemaResponse hasSchema(String schemaName, Map<String,String> options) throws GPUdbException
Checks for the existence of a schema with the given name.
- Parameters:
schemaName - Name of the schema to check for existence, in root, using standard name resolution rules.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
hasTable
public HasTableResponse hasTable(HasTableRequest request) throws GPUdbException
Checks for the existence of a table with the given name.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
hasTable
public HasTableResponse hasTable(String tableName, Map<String,String> options) throws GPUdbException
Checks for the existence of a table with the given name.
- Parameters:
tableName - Name of the table to check for existence, in [schema_name.]table_name format, using standard name resolution rules.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
hasType
public HasTypeResponse hasType(HasTypeRequest request) throws GPUdbException
Checks for the existence of a type.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
hasType
public HasTypeResponse hasType(String typeId, Map<String,String> options) throws GPUdbException
Checks for the existence of a type.
- Parameters:
typeId - ID of the type, returned in response to a createType request.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
importModel
public ImportModelResponse importModel(ImportModelRequest request) throws GPUdbException
- Throws:
GPUdbException
-
importModel
public ImportModelResponse importModel(String modelName, String registryName, String container, String runFunction, String modelType, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
insertRecordsRaw
public InsertRecordsResponse insertRecordsRaw(RawInsertRecordsRequest request) throws GPUdbException
Adds multiple records to the specified table. The operation is synchronous, meaning that a response will not be returned until all the records are fully inserted and available. The response payload provides the counts of the number of records actually inserted and/or updated, and can provide the unique identifier of each added record.
The options parameter can be used to customize this function's behavior.
The UPDATE_ON_EXISTING_PK option specifies the record collision policy for inserting into a table with a primary key, but is ignored if no primary key exists.
The RETURN_RECORD_IDS option indicates that the database should return the unique identifiers of inserted records.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
insertRecords
public <TRequest> InsertRecordsResponse insertRecords(InsertRecordsRequest<TRequest> request) throws GPUdbException
Adds multiple records to the specified table. The operation is synchronous, meaning that a response will not be returned until all the records are fully inserted and available. The response payload provides the counts of the number of records actually inserted and/or updated, and can provide the unique identifier of each added record.
The options parameter can be used to customize this function's behavior.
The UPDATE_ON_EXISTING_PK option specifies the record collision policy for inserting into a table with a primary key, but is ignored if no primary key exists.
The RETURN_RECORD_IDS option indicates that the database should return the unique identifiers of inserted records.
- Type Parameters:
TRequest - The type of object being added.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
insertRecords
public <TRequest> InsertRecordsResponse insertRecords(TypeObjectMap<TRequest> typeObjectMap, InsertRecordsRequest<TRequest> request) throws GPUdbException
Adds multiple records to the specified table. The operation is synchronous, meaning that a response will not be returned until all the records are fully inserted and available. The response payload provides the counts of the number of records actually inserted and/or updated, and can provide the unique identifier of each added record.
The options parameter can be used to customize this function's behavior.
The UPDATE_ON_EXISTING_PK option specifies the record collision policy for inserting into a table with a primary key, but is ignored if no primary key exists.
The RETURN_RECORD_IDS option indicates that the database should return the unique identifiers of inserted records.
- Type Parameters:
TRequest - The type of object being added.
- Parameters:
typeObjectMap - Type object map used for encoding input objects.
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
IllegalArgumentException - if typeObjectMap is not an instance of one of the following: Type, TypeObjectMap, Schema, or a Class that implements IndexedRecord.
GPUdbException - if an error occurs during the operation.
-
insertRecords
public <TRequest> InsertRecordsResponse insertRecords(String tableName, List<TRequest> data, Map<String,String> options) throws GPUdbException
Adds multiple records to the specified table. The operation is synchronous, meaning that a response will not be returned until all the records are fully inserted and available. The response payload provides the counts of the number of records actually inserted and/or updated, and can provide the unique identifier of each added record.
The options parameter can be used to customize this function's behavior.
The UPDATE_ON_EXISTING_PK option specifies the record collision policy for inserting into a table with a primary key, but is ignored if no primary key exists.
The RETURN_RECORD_IDS option indicates that the database should return the unique identifiers of inserted records.
- Type Parameters:
TRequest - The type of object being added.
- Parameters:
tableName - Name of table to which the records are to be added, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
data - An array of binary-encoded data for the records to be added. All records must be of the same type as that of the table. Empty array if listEncoding is JSON.
options - Optional parameters.
- UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to TRUE, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to FALSE, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by IGNORE_EXISTING_PK, ALLOW_PARTIAL_BATCH, and RETURN_INDIVIDUAL_ERRORS. If the specified table does not have a primary key, then this option has no effect. Supported values: TRUE (upsert new records when primary keys match existing records); FALSE (reject new records when primary keys match existing records). The default value is FALSE.
- IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If FALSE, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by ALLOW_PARTIAL_BATCH and RETURN_INDIVIDUAL_ERRORS. If the specified table does not have a primary key or if upsert mode is in effect (UPDATE_ON_EXISTING_PK is TRUE), then this option has no effect. Supported values: TRUE (ignore new records whose primary key values collide with those of existing records); FALSE (treat as errors any new records whose primary key values collide with those of existing records). The default value is FALSE.
- PK_CONFLICT_PREDICATE_HIGHER: The record with the higher value for the column resolves the primary-key insert conflict. The default value is ''.
- PK_CONFLICT_PREDICATE_LOWER: The record with the lower value for the column resolves the primary-key insert conflict. The default value is ''.
- RETURN_RECORD_IDS: If TRUE, return the internal record ID along with each inserted record. Supported values: TRUE, FALSE. The default value is FALSE.
- TRUNCATE_STRINGS: If set to TRUE, any strings which are too long for their target charN string columns will be truncated to fit. Supported values: TRUE, FALSE. The default value is FALSE.
- RETURN_INDIVIDUAL_ERRORS: If set to TRUE, success will always be returned, and any errors found will be included in the info map. The "bad_record_indices" entry is a comma-separated list of bad records (0-based), and there will also be an "error_N" entry for each record with an error, where N is the index (0-based). Supported values: TRUE, FALSE. The default value is FALSE.
- ALLOW_PARTIAL_BATCH: If set to TRUE, all correct records will be inserted and incorrect records will be rejected and reported. Otherwise, the entire batch will be rejected if any records are incorrect. Supported values: TRUE, FALSE. The default value is FALSE.
- DRY_RUN: If set to TRUE, no data will be saved and any errors will be returned. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
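The collision-policy options above can be combined in an ordinary Map before calling insertRecords. The sketch below builds an upsert-style options map; the lowercase string keys are the wire form of the option constants described above, the table name is hypothetical, and the insert call itself is left in comments because it needs a live server and a registered type.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: building an options map for insertRecords. The values "true"/
// "false" are the wire form of the TRUE/FALSE option values.
public class InsertOptionsSketch {
    static Map<String, String> upsertOptions() {
        Map<String, String> options = new HashMap<>();
        // Replace existing records on primary-key collision ("upsert").
        options.put("update_on_existing_pk", "true");
        // Ask the server to return the internal ID of each inserted record.
        options.put("return_record_ids", "true");
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> options = upsertOptions();
        System.out.println(options.get("update_on_existing_pk"));  // prints "true"

        // With a live server and a registered record type:
        // InsertRecordsResponse resp =
        //     gpudb.insertRecords("ki_home.stocks", records, options);
        // List<String> ids = resp.getRecordIds();
    }
}
```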
-
insertRecords
public <TRequest> InsertRecordsResponse insertRecords(TypeObjectMap<TRequest> typeObjectMap, String tableName, List<TRequest> data, Map<String,String> options) throws GPUdbException
Adds multiple records to the specified table. The operation is synchronous, meaning that a response will not be returned until all the records are fully inserted and available. The response payload provides the counts of the number of records actually inserted and/or updated, and can provide the unique identifier of each added record.
The options parameter can be used to customize this function's behavior.
The UPDATE_ON_EXISTING_PK option specifies the record collision policy for inserting into a table with a primary key, but is ignored if no primary key exists.
The RETURN_RECORD_IDS option indicates that the database should return the unique identifiers of inserted records.
- Type Parameters:
TRequest - The type of object being added.
- Parameters:
typeObjectMap - Type object map used for encoding input objects.
tableName - Name of table to which the records are to be added, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table.
data - An array of binary-encoded data for the records to be added. All records must be of the same type as that of the table. Empty array if listEncoding is JSON.
options - Optional parameters.
- UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to TRUE, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to FALSE, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by IGNORE_EXISTING_PK, ALLOW_PARTIAL_BATCH, and RETURN_INDIVIDUAL_ERRORS. If the specified table does not have a primary key, then this option has no effect. Supported values: TRUE (upsert new records when primary keys match existing records); FALSE (reject new records when primary keys match existing records). The default value is FALSE.
- IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If FALSE, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by ALLOW_PARTIAL_BATCH and RETURN_INDIVIDUAL_ERRORS. If the specified table does not have a primary key or if upsert mode is in effect (UPDATE_ON_EXISTING_PK is TRUE), then this option has no effect. Supported values: TRUE (ignore new records whose primary key values collide with those of existing records); FALSE (treat as errors any new records whose primary key values collide with those of existing records). The default value is FALSE.
- PK_CONFLICT_PREDICATE_HIGHER: The record with the higher value for the column resolves the primary-key insert conflict. The default value is ''.
- PK_CONFLICT_PREDICATE_LOWER: The record with the lower value for the column resolves the primary-key insert conflict. The default value is ''.
- RETURN_RECORD_IDS: If TRUE, return the internal record ID along with each inserted record. Supported values: TRUE, FALSE. The default value is FALSE.
- TRUNCATE_STRINGS: If set to TRUE, any strings which are too long for their target charN string columns will be truncated to fit. Supported values: TRUE, FALSE. The default value is FALSE.
- RETURN_INDIVIDUAL_ERRORS: If set to TRUE, success will always be returned, and any errors found will be included in the info map. The "bad_record_indices" entry is a comma-separated list of bad records (0-based), and there will also be an "error_N" entry for each record with an error, where N is the index (0-based). Supported values: TRUE, FALSE. The default value is FALSE.
- ALLOW_PARTIAL_BATCH: If set to TRUE, all correct records will be inserted and incorrect records will be rejected and reported. Otherwise, the entire batch will be rejected if any records are incorrect. Supported values: TRUE, FALSE. The default value is FALSE.
- DRY_RUN: If set to TRUE, no data will be saved and any errors will be returned. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
IllegalArgumentException - if typeObjectMap is not an instance of one of the following: Type, TypeObjectMap, Schema, or a Class that implements IndexedRecord.
GPUdbException - if an error occurs during the operation.
-
insertRecordsFromFiles
public InsertRecordsFromFilesResponse insertRecordsFromFiles(InsertRecordsFromFilesRequest request) throws GPUdbException
Reads from one or more files and inserts the data into a new or existing table. The source data can be located either in KiFS; on the cluster, accessible to the database; or remotely, accessible via a pre-defined external data source.
For delimited text files, there are two loading schemes: positional and name-based. The name-based loading scheme is enabled when the file has a header present and TEXT_HAS_HEADER is set to TRUE. In this scheme, the source file(s) field names must match the target table's column names exactly; however, the source file can have more fields than the target table has columns. If ERROR_HANDLING is set to PERMISSIVE, the source file can have fewer fields than the target table has columns. If the name-based loading scheme is being used, names matching the file header's names may be provided to COLUMNS_TO_LOAD instead of numbers, but ranges are not supported.
Note: Due to data being loaded in parallel, there is no insertion order guaranteed. For tables with primary keys, in the case of a primary key collision, this means it is indeterminate which record will be inserted first and remain, while the rest of the colliding-key records are discarded.
Returns once all files are processed.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
insertRecordsFromFiles
public InsertRecordsFromFilesResponse insertRecordsFromFiles(String tableName, List<String> filepaths, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options) throws GPUdbException
Reads from one or more files and inserts the data into a new or existing table. The source data can be located either in KiFS; on the cluster, accessible to the database; or remotely, accessible via a pre-defined external data source.
For delimited text files, there are two loading schemes: positional and name-based. The name-based loading scheme is enabled when the file has a header present and TEXT_HAS_HEADER is set to TRUE. In this scheme, the source file(s) field names must match the target table's column names exactly; however, the source file can have more fields than the target table has columns. If ERROR_HANDLING is set to PERMISSIVE, the source file can have fewer fields than the target table has columns. If the name-based loading scheme is being used, names matching the file header's names may be provided to COLUMNS_TO_LOAD instead of numbers, but ranges are not supported.
Note: Due to data being loaded in parallel, there is no insertion order guaranteed. For tables with primary keys, in the case of a primary key collision, this means it is indeterminate which record will be inserted first and remain, while the rest of the colliding-key records are discarded.
Returns once all files are processed.
- Parameters:
tableName - Name of the table into which the data will be inserted, in [schema_name.]table_name format, using standard name resolution rules. If the table does not exist, the table will be created using either an existing TYPE_ID or the type inferred from the file, and the new table name will have to meet standard table naming criteria.
filepaths - A list of file paths from which data will be sourced. For paths in KiFS, use the URI prefix of kifs:// followed by the path to a file or directory. File matching by prefix is supported, e.g. kifs://dir/file would match dir/file_1 and dir/file_2. When prefix matching is used, the path must start with a full, valid KiFS directory name. If an external data source is specified in DATASOURCE_NAME, these file paths must resolve to accessible files at that data source location. Prefix matching is supported. If the data source is hdfs, prefixes must be aligned with directories, i.e. partial file names will not match. If no data source is specified, the files are assumed to be local to the database and must all be accessible to the gpudb user, residing on the path (or relative to the path) specified by the external files directory in the Kinetica configuration file. Wildcards (*) can be used to specify a group of files. Prefix matching is supported; the prefixes must be aligned with directories. If the first path ends in .tsv, the text delimiter will be defaulted to a tab character. If the first path ends in .psv, the text delimiter will be defaulted to a pipe character (|).
modifyColumns - Not implemented yet. The default value is an empty Map.
createTableOptions - Options from createTable, allowing the structure of the table to be defined independently of the data source, when creating the target table.
- TYPE_ID: ID of a currently registered type.
- NO_ERROR_IF_EXISTS: If TRUE, prevents an error from occurring if the table already exists and is of the given type. If a table with the same name but a different type exists, it is still an error. Supported values: TRUE, FALSE. The default value is FALSE.
- IS_REPLICATED: Affects the distribution scheme for the table's data. If TRUE and the given table has no explicit shard key defined, the table will be replicated. If FALSE, the table will be sharded according to the shard key specified in the given TYPE_ID, or randomly sharded, if no shard key is specified. Note that a type containing a shard key cannot be used to create a replicated table. Supported values: TRUE, FALSE. The default value is FALSE.
- FOREIGN_KEYS: Semicolon-separated list of foreign keys, of the format '(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]'.
- FOREIGN_SHARD_KEY: Foreign shard key of the format 'source_column references shard_by_column from target_table(primary_key_column)'.
- PARTITION_TYPE: Partitioning scheme to use. Supported values: RANGE (use range partitioning); INTERVAL (use interval partitioning); LIST (use list partitioning); HASH (use hash partitioning); SERIES (use series partitioning).
- PARTITION_KEYS: Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by PARTITION_DEFINITIONS.
- PARTITION_DEFINITIONS: Comma-separated list of partition definitions, whose format depends on the choice of PARTITION_TYPE. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example formats.
- IS_AUTOMATIC_PARTITION: If TRUE, a new partition will be created for values which don't fall into an existing partition. Currently only supported for list partitions. Supported values: TRUE, FALSE. The default value is FALSE.
- TTL: Sets the TTL of the table specified in tableName.
- CHUNK_SIZE: Indicates the number of records per chunk to be used for this table.
- CHUNK_COLUMN_MAX_MEMORY: Indicates the target maximum data size for each column in a chunk to be used for this table.
- CHUNK_MAX_MEMORY: Indicates the target maximum data size for all columns in a chunk to be used for this table.
- IS_RESULT_TABLE: Indicates whether the table is a memory-only table. A result table cannot contain columns with text_search data-handling, and it will not be retained if the server is restarted. Supported values: TRUE, FALSE. The default value is FALSE.
- STRATEGY_DEFINITION: The tier strategy for the table and its columns.
- COMPRESSION_CODEC: The default compression codec for this table's columns.
Map.options- Optional parameters.BAD_RECORD_TABLE_NAME: Name of a table to which records that were rejected are written. The bad-record-table has the following columns: line_number (long), line_rejected (string), error_message (string). WhenERROR_HANDLINGisABORT, bad records table is not populated.BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record-table. The default value is '10000'.BAD_RECORD_TABLE_LIMIT_PER_INPUT: For subscriptions, a positive integer indicating the maximum number of records that can be written to the bad-record-table per file/payload. Default value will beBAD_RECORD_TABLE_LIMITand total size of the table per rank is limited toBAD_RECORD_TABLE_LIMIT.BATCH_SIZE: Number of records to insert per batch when inserting data. The default value is '50000'.COLUMN_FORMATS: For each target column specified, applies the column-property-bound format to the source data loaded into that column. Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, and datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., '{ "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }'. SeeDEFAULT_COLUMN_FORMATSfor valid format syntax.COLUMNS_TO_LOAD: Specifies a comma-delimited list of columns from the source data to load. If more than one file is being loaded, this list applies to all files. Column numbers can be specified discretely or as a range. 
For example, a value of '5,7,1..3' will insert values from the fifth column in the source data into the first column in the target table, from the seventh column in the source data into the second column in the target table, and from the first through third columns in the source data into the third through fifth columns in the target table. If the source data contains a header, column names matching the file header names may be provided instead of column numbers. If the target table doesn't exist, the table will be created with the columns in this order. If the target table does exist with columns in a different order than the source data, this list can be used to match the order of the target table. For example, a value of 'C, B, A' will create a three column table with column C, followed by column B, followed by column A; or will insert those fields in that order into a table created with columns in that order. If the target table exists, the column names must match the source data field names for a name-mapping to be successful. Mutually exclusive withCOLUMNS_TO_SKIP.COLUMNS_TO_SKIP: Specifies a comma-delimited list of columns from the source data to skip. Mutually exclusive withCOLUMNS_TO_LOAD.COMPRESSION_TYPE: Source data compression type. Supported values:NONE: No compression.AUTO: Auto detect compression typeGZIP: gzip file compression.BZIP2: bzip2 file compression.
AUTO.DATASOURCE_NAME: Name of an existing external data source from which data file(s) specified infilepathswill be loadedDEFAULT_COLUMN_FORMATS: Specifies the default format to be applied to source data loaded into columns with the corresponding column property. Currently supported column properties include date, time, and datetime. This default column-property-bound format can be overridden by specifying a column property and format for a given target column inCOLUMN_FORMATS. For each specified annotation, the format will apply to all columns with that annotation unless a customCOLUMN_FORMATSfor that annotation is specified. The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., '{ "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }'. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', 'S', and 's', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds). Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation meet both the 'date' and 'time' control character requirements. For example, '{"datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to interpret text as "05/04/2000 12:12:11"ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values:PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.IGNORE_BAD_RECORDS: Malformed records are skipped.ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
ABORT.FILE_TYPE: Specifies the type of the file(s) whose records will be inserted. Supported values:AVRO: Avro file formatDELIMITED_TEXT: Delimited text file format; e.g., CSV, TSV, PSV, etc.GDB: Esri/GDB file formatJSON: Json file formatPARQUET: Apache Parquet file formatSHAPEFILE: ShapeFile file format
DELIMITED_TEXT.FLATTEN_COLUMNS: Specifies how to handle nested columns. Supported values:TRUE: Break up nested columns to multiple columnsFALSE: Treat nested columns as json columns instead of flattening
FALSE.GDAL_CONFIGURATION_OPTIONS: Comma separated list of gdal conf options, for the specific requests: key=valueIGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled whenUPDATE_ON_EXISTING_PKisFALSE). If set toTRUE, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. IfFALSE, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined byERROR_HANDLING. If the specified table does not have a primary key or if upsert mode is in effect (UPDATE_ON_EXISTING_PKisTRUE), then this option has no effect. Supported values:TRUE: Ignore new records whose primary key values collide with those of existing recordsFALSE: Treat as errors any new records whose primary key values collide with those of existing records
FALSE.INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values:FULL: Run a type inference on the source data (if needed) and ingestDRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode ofERROR_HANDLING.TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
FULL.KAFKA_CONSUMERS_PER_RANK: Number of Kafka consumer threads per rank (valid range 1-6). The default value is '1'.KAFKA_GROUP_ID: The group id to be used when consuming data from a Kafka topic (valid only for Kafka datasource subscriptions).KAFKA_OFFSET_RESET_POLICY: Policy to determine whether the Kafka data consumption starts either at earliest offset or latest offset. Supported values: The default value isEARLIEST.KAFKA_OPTIMISTIC_INGEST: Enable optimistic ingestion where Kafka topic offsets and table data are committed independently to achieve parallelism. Supported values: The default value isFALSE.KAFKA_SUBSCRIPTION_CANCEL_AFTER: Sets the Kafka subscription lifespan (in minutes). Expired subscription will be cancelled automatically.KAFKA_TYPE_INFERENCE_FETCH_TIMEOUT: Maximum time to collect Kafka messages before type inferencing on the set of them.LAYER: Geo files layer(s) name(s): comma separated.LOADING_MODE: Scheme for distributing the extraction and loading of data from the source data file(s). This option applies only when loading files that are local to the database. Supported values:HEAD: The head node loads all data. All files must be available to the head node.DISTRIBUTED_SHARED: The head node coordinates loading data by worker processes across all nodes from shared files available to all workers. NOTE: Instead of existing on a shared source, the files can be duplicated on a source local to each host to improve performance, though the files must appear as the same data set from the perspective of all hosts performing the load.DISTRIBUTED_LOCAL: A single worker process on each node loads all files that are available to it. This option works best when each worker loads files from its own file system, to maximize performance. 
In order to avoid data duplication, either each worker performing the load needs to have visibility to a set of files unique to it (no file is visible to more than one node) or the target table needs to have a primary key (which will allow the worker to automatically deduplicate data). NOTE: If the target table doesn't exist, the table structure will be determined by the head node. If the head node has no files local to it, it will be unable to determine the structure and the request will fail. If the head node is configured to have no worker processes, no data strictly accessible to the head node will be loaded.
HEAD.LOCAL_TIME_OFFSET: Apply an offset to Avro local timestamp columns.MAX_RECORDS_TO_LOAD: Limit the number of records to load in this request: if this number is larger thanBATCH_SIZE, then the number of records loaded will be limited to the next whole number ofBATCH_SIZE(per working thread).NUM_TASKS_PER_RANK: Number of tasks for reading file per rank. Default will be system configuration parameter, external_file_reader_num_tasks.POLL_INTERVAL: IfTRUE, the number of seconds between attempts to load external files into the table. If zero, polling will be continuous as long as data is found. If no data is found, the interval will steadily increase to a maximum of 60 seconds. The default value is '0'.PRIMARY_KEYS: Comma separated list of column names to set as primary keys, when not specified in the type.SCHEMA_REGISTRY_CONNECTION_RETRIES: Confluent Schema registry connection timeout (in Secs)SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema registry connection timeout (in Secs)SCHEMA_REGISTRY_MAX_CONSECUTIVE_CONNECTION_FAILURES: Max records to skip due to SR connection failures, before failingMAX_CONSECUTIVE_INVALID_SCHEMA_FAILURE: Max records to skip due to schema related errors, before failingSCHEMA_REGISTRY_SCHEMA_NAME: Name of the Avro schema in the schema registry to use when reading Avro records.SHARD_KEYS: Comma separated list of column names to set as shard keys, when not specified in the type.SKIP_LINES: Skip a number of lines from the beginning of the file.START_OFFSETS: Starting offsets by partition to fetch from kafka. A comma separated list of partition:offset pairs.SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: The default value isFALSE.TABLE_INSERT_MODE: Insertion scheme to use when inserting records from multiple shapefiles. Supported values:SINGLE: Insert all records into a single table.TABLE_PER_FILE: Insert records from each file into a new table corresponding to that file.
SINGLE. TEXT_COMMENT_STRING: Specifies the character string that should be interpreted as a comment line prefix in the source data. All lines in the data starting with the provided string are ignored. For DELIMITED_TEXT FILE_TYPE only. The default value is '#'. TEXT_DELIMITER: Specifies the character delimiting field values in the source data and field names in the header (if present). For DELIMITED_TEXT FILE_TYPE only. The default value is ','. TEXT_ESCAPE_CHARACTER: Specifies the character that is used to escape other characters in the source data. An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by an escape character will be interpreted as the ASCII bell, backspace, form feed, line feed, carriage return, horizontal tab, and vertical tab, respectively. For example, the escape character followed by an 'n' will be interpreted as a newline within a field value. The escape character can also be used to escape the quoting character, and will be treated as an escape character whether it is within a quoted field value or not. For DELIMITED_TEXT FILE_TYPE only. TEXT_HAS_HEADER: Indicates whether the source data contains a header row. For DELIMITED_TEXT FILE_TYPE only. Supported values: TRUE, FALSE. The default value is TRUE. TEXT_HEADER_PROPERTY_DELIMITER: Specifies the delimiter for column properties in the header row (if present). Cannot be set to the same value as TEXT_DELIMITER. For DELIMITED_TEXT FILE_TYPE only. The default value is '|'. TEXT_NULL_STRING: Specifies the character string that should be interpreted as a null value in the source data. For DELIMITED_TEXT FILE_TYPE only. The default value is '\N'. TEXT_QUOTE_CHARACTER: Specifies the character that should be interpreted as a field value quoting character in the source data. The character must appear at the beginning and end of a field value to take effect. Delimiters within quoted fields are treated as literals and not delimiters.
Within a quoted field, two consecutive quote characters will be interpreted as a single literal quote character, effectively escaping it. To not have a quote character, specify an empty string. For DELIMITED_TEXT FILE_TYPE only. The default value is '"'. TEXT_SEARCH_COLUMNS: Add the 'text_search' property to internally inferenced string columns. Comma-separated list of column names or '*' for all columns. To add the 'text_search' property only to string columns greater than or equal to a minimum size, also set TEXT_SEARCH_MIN_COLUMN_LENGTH. TEXT_SEARCH_MIN_COLUMN_LENGTH: Set the minimum column size for strings to apply the 'text_search' property to. Used only when TEXT_SEARCH_COLUMNS has a value. TRIM_SPACE: If set to TRUE, remove leading or trailing space from fields. Supported values: TRUE, FALSE. The default value is FALSE. TRUNCATE_STRINGS: If set to TRUE, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE. TRUNCATE_TABLE: If set to TRUE, truncates the table specified by tableName prior to loading the file(s). Supported values: TRUE, FALSE. The default value is FALSE. TYPE_INFERENCE_MAX_RECORDS_READ: The default value is ''. TYPE_INFERENCE_MODE: Optimize type inferencing for either speed or accuracy. Supported values: ACCURACY: Scans data to get exactly-typed and sized columns for all data scanned. SPEED: Scans data and picks the widest possible column types so that 'all' values will fit with minimum data scanned.
ACCURACY. UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to TRUE, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be 'upserted'). If set to FALSE, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by IGNORE_EXISTING_PK and ERROR_HANDLING. If the specified table does not have a primary key, then this option has no effect. Supported values: TRUE: Upsert new records when primary keys match existing records. FALSE: Reject new records when primary keys match existing records.
FALSE.
Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
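The option constants above correspond to lowercase string keys in the options map. A minimal sketch of preparing such a map for a delimited-text file load, assuming this is the file-based overload (insertRecordsFromFiles) and using placeholder server URL, table name, and file path:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: builds the option map described above for a delimited-text
// load. The GPUdb call itself is shown commented out, since it requires a
// live Kinetica cluster; the URL, table, and file path are placeholders.
public class FileLoadOptionsExample {
    // Option keys are the lowercase forms of the constants documented above.
    static Map<String, String> buildLoadOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("file_type", "delimited_text");  // FILE_TYPE: CSV-style source
        options.put("loading_mode", "head");         // LOADING_MODE: head node loads all data
        options.put("error_handling", "permissive"); // ERROR_HANDLING: null-fill or skip bad records
        options.put("text_delimiter", ",");          // TEXT_DELIMITER: ',' is also the default
        options.put("truncate_table", "false");      // keep any existing rows
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> options = buildLoadOptions();
        // With a live cluster, the call would look roughly like (placeholders):
        //   GPUdb gpudb = new GPUdb("http://localhost:9191");
        //   gpudb.insertRecordsFromFiles(
        //       "ki_home.product",
        //       java.util.Collections.singletonList("data/products.csv"),
        //       new HashMap<>(),   // modifyColumns (not implemented yet)
        //       new HashMap<>(),   // createTableOptions (defaults)
        //       options);
        System.out.println(options.size() + " options set");
    }
}
```

The map-building step is shown separately so the option keys can be inspected without a server connection.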
-
insertRecordsFromPayload
public InsertRecordsFromPayloadResponse insertRecordsFromPayload(InsertRecordsFromPayloadRequest request) throws GPUdbException
Reads from the given text-based or binary payload and inserts the data into a new or existing table. The table will be created if it doesn't already exist. Returns once all records are processed.
- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
insertRecordsFromPayload
public InsertRecordsFromPayloadResponse insertRecordsFromPayload(String tableName, String dataText, ByteBuffer dataBytes, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options) throws GPUdbException
Reads from the given text-based or binary payload and inserts the data into a new or existing table. The table will be created if it doesn't already exist. Returns once all records are processed.
- Parameters:
tableName - Name of the table into which the data will be inserted, in [schema_name.]table_name format, using standard name resolution rules. If the table does not exist, the table will be created using either an existing TYPE_ID or the type inferred from the payload, and the new table name will have to meet standard table naming criteria. dataText - Records formatted as delimited text. dataBytes - Records formatted as binary data. modifyColumns - Not implemented yet. The default value is an empty Map. createTableOptions - Options used when creating the target table. Includes the type to use. The other options match those in createTable. TYPE_ID: ID of a currently registered type. The default value is ''. NO_ERROR_IF_EXISTS: If TRUE, prevents an error from occurring if the table already exists and is of the given type. If a table with the same ID but a different type exists, it is still an error. Supported values: TRUE, FALSE. The default value is FALSE. IS_REPLICATED: Affects the distribution scheme for the table's data. If TRUE and the given type has no explicit shard key defined, the table will be replicated. If FALSE, the table will be sharded according to the shard key specified in the given TYPE_ID, or randomly sharded, if no shard key is specified. Note that a type containing a shard key cannot be used to create a replicated table. Supported values: TRUE, FALSE. The default value is FALSE. FOREIGN_KEYS: Semicolon-separated list of foreign keys, of the format '(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]'. FOREIGN_SHARD_KEY: Foreign shard key of the format 'source_column references shard_by_column from target_table(primary_key_column)'. PARTITION_TYPE: Partitioning scheme to use. Supported values: RANGE: Use range partitioning. INTERVAL: Use interval partitioning. LIST: Use list partitioning. HASH: Use hash partitioning. SERIES: Use series partitioning.
PARTITION_KEYS: Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by PARTITION_DEFINITIONS. PARTITION_DEFINITIONS: Comma-separated list of partition definitions, whose format depends on the choice of PARTITION_TYPE. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example formats. IS_AUTOMATIC_PARTITION: If TRUE, a new partition will be created for values which don't fall into an existing partition. Currently only supported for list partitions. Supported values: TRUE, FALSE. The default value is FALSE. TTL: Sets the TTL of the table specified in tableName. CHUNK_SIZE: Indicates the number of records per chunk to be used for this table. CHUNK_COLUMN_MAX_MEMORY: Indicates the target maximum data size for each column in a chunk to be used for this table. CHUNK_MAX_MEMORY: Indicates the target maximum data size for all columns in a chunk to be used for this table. IS_RESULT_TABLE: Indicates whether the table is a memory-only table. A result table cannot contain columns with text_search data-handling, and it will not be retained if the server is restarted. Supported values: TRUE, FALSE. The default value is FALSE. STRATEGY_DEFINITION: The tier strategy for the table and its columns. COMPRESSION_CODEC: The default compression codec for this table's columns.
Map. options - Optional parameters. BAD_RECORD_TABLE_NAME: Optional name of a table to which records that were rejected are written. The bad-record table has the following columns: line_number (long), line_rejected (string), error_message (string). BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record table. Default value is 10000. BAD_RECORD_TABLE_LIMIT_PER_INPUT: For subscriptions: a positive integer indicating the maximum number of records that can be written to the bad-record table per file/payload. Default value is 'bad_record_table_limit', and the total size of the table per rank is limited to 'bad_record_table_limit'. BATCH_SIZE: Internal tuning parameter: number of records per batch when inserting data. COLUMN_FORMATS: For each target column specified, applies the column-property-bound format to the source data loaded into that column. Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, and datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., '{ "order_date" : { "date" : "%Y.%m.%d" }, "order_time" : { "time" : "%H:%M:%S" } }'. See DEFAULT_COLUMN_FORMATS for valid format syntax. COLUMNS_TO_LOAD: Specifies a comma-delimited list of columns from the source data to load. If more than one file is being loaded, this list applies to all files. Column numbers can be specified discretely or as a range. For example, a value of '5,7,1..3' will insert values from the fifth column in the source data into the first column in the target table, from the seventh column in the source data into the second column in the target table, and from the first through third columns in the source data into the third through fifth columns in the target table.
If the source data contains a header, column names matching the file header names may be provided instead of column numbers. If the target table doesn't exist, the table will be created with the columns in this order. If the target table does exist with columns in a different order than the source data, this list can be used to match the order of the target table. For example, a value of 'C, B, A' will create a three-column table with column C, followed by column B, followed by column A; or will insert those fields in that order into a table created with columns in that order. If the target table exists, the column names must match the source data field names for a name-mapping to be successful. Mutually exclusive with COLUMNS_TO_SKIP. COLUMNS_TO_SKIP: Specifies a comma-delimited list of columns from the source data to skip. Mutually exclusive with COLUMNS_TO_LOAD. COMPRESSION_TYPE: Optional: payload compression type. Supported values: NONE: Uncompressed. AUTO: Default. Auto detect compression type. GZIP: gzip file compression. BZIP2: bzip2 file compression.
AUTO. DEFAULT_COLUMN_FORMATS: Specifies the default format to be applied to source data loaded into columns with the corresponding column property. Currently supported column properties include date, time, and datetime. This default column-property-bound format can be overridden by specifying a column property and format for a given target column in COLUMN_FORMATS. For each specified annotation, the format will apply to all columns with that annotation unless a custom COLUMN_FORMATS for that annotation is specified. The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., '{ "date" : "%Y.%m.%d", "time" : "%H:%M:%S" }'. Column formats are specified as a string of control characters and plain text. The supported control characters are 'Y', 'm', 'd', 'H', 'M', and 'S', which follow the Linux 'strptime()' specification, as well as 's', which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds). Formats for the 'date' annotation must include the 'Y', 'm', and 'd' control characters. Formats for the 'time' annotation must include the 'H', 'M', and either 'S' or 's' (but not both) control characters. Formats for the 'datetime' annotation meet both the 'date' and 'time' control character requirements. For example, '{ "datetime" : "%m/%d/%Y %H:%M:%S" }' would be used to interpret text such as "05/04/2000 12:12:11". ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values: PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped. IGNORE_BAD_RECORDS: Malformed records are skipped. ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
ABORT. FILE_TYPE: Specifies the type of the file(s) whose records will be inserted. Supported values: AVRO: Avro file format. DELIMITED_TEXT: Delimited text file format, e.g., CSV, TSV, PSV, etc. GDB: Esri/GDB file format. JSON: JSON file format. PARQUET: Apache Parquet file format. SHAPEFILE: ShapeFile file format.
DELIMITED_TEXT. FLATTEN_COLUMNS: Specifies how to handle nested columns. Supported values: TRUE: Break up nested columns into multiple columns. FALSE: Treat nested columns as JSON columns instead of flattening.
FALSE. GDAL_CONFIGURATION_OPTIONS: Comma-separated list of GDAL configuration options for the specific requests, as key=value pairs. The default value is ''. IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If FALSE, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by ERROR_HANDLING. If the specified table does not have a primary key or if upsert mode is in effect (UPDATE_ON_EXISTING_PK is TRUE), then this option has no effect. Supported values: TRUE: Ignore new records whose primary key values collide with those of existing records. FALSE: Treat as errors any new records whose primary key values collide with those of existing records.
FALSE. INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values: FULL: Run a type inference on the source data (if needed) and ingest. DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING. TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
FULL. LAYER: Optional: comma-separated geo file layer name(s). The default value is ''. LOADING_MODE: Scheme for distributing the extraction and loading of data from the source data file(s). This option applies only when loading files that are local to the database. Supported values: HEAD: The head node loads all data. All files must be available to the head node. DISTRIBUTED_SHARED: The head node coordinates loading data by worker processes across all nodes from shared files available to all workers. NOTE: Instead of existing on a shared source, the files can be duplicated on a source local to each host to improve performance, though the files must appear as the same data set from the perspective of all hosts performing the load. DISTRIBUTED_LOCAL: A single worker process on each node loads all files that are available to it. This option works best when each worker loads files from its own file system, to maximize performance. In order to avoid data duplication, either each worker performing the load needs to have visibility to a set of files unique to it (no file is visible to more than one node) or the target table needs to have a primary key (which will allow the worker to automatically deduplicate data). NOTE: If the target table doesn't exist, the table structure will be determined by the head node. If the head node has no files local to it, it will be unable to determine the structure and the request will fail. If the head node is configured to have no worker processes, no data strictly accessible to the head node will be loaded.
HEAD. LOCAL_TIME_OFFSET: Apply an offset to Avro local timestamp columns. MAX_RECORDS_TO_LOAD: Limit the number of records to load in this request: if this number is larger than BATCH_SIZE, then the number of records loaded will be limited to the next whole multiple of BATCH_SIZE (per working thread). The default value is ''. NUM_TASKS_PER_RANK: Optional: number of tasks for reading files per rank. Default is the system configuration parameter external_file_reader_num_tasks. POLL_INTERVAL: The number of seconds between attempts to load external files into the table. If zero, polling will be continuous as long as data is found. If no data is found, the interval will steadily increase to a maximum of 60 seconds. PRIMARY_KEYS: Optional: comma-separated list of column names to set as primary keys, when not specified in the type. The default value is ''. SCHEMA_REGISTRY_CONNECTION_RETRIES: Number of times to retry the Confluent Schema Registry connection. SCHEMA_REGISTRY_CONNECTION_TIMEOUT: Confluent Schema Registry connection timeout (in seconds). SCHEMA_REGISTRY_MAX_CONSECUTIVE_CONNECTION_FAILURES: Maximum number of records to skip due to Schema Registry connection failures before failing. MAX_CONSECUTIVE_INVALID_SCHEMA_FAILURE: Maximum number of records to skip due to schema-related errors before failing. SCHEMA_REGISTRY_SCHEMA_NAME: Name of the Avro schema in the schema registry to use when reading Avro records. SHARD_KEYS: Optional: comma-separated list of column names to set as shard keys, when not specified in the type. The default value is ''. SKIP_LINES: Skip a number of lines from the beginning of the file. SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE. TABLE_INSERT_MODE: Insertion scheme to use when inserting records from multiple files: if TABLE_PER_FILE, insert records from each file into a new table. Currently supported only for shapefiles.
Supported values: SINGLE, TABLE_PER_FILE. The default value is SINGLE. TEXT_COMMENT_STRING: Specifies the character string that should be interpreted as a comment line prefix in the source data. All lines in the data starting with the provided string are ignored. For DELIMITED_TEXT FILE_TYPE only. The default value is '#'. TEXT_DELIMITER: Specifies the character delimiting field values in the source data and field names in the header (if present). For DELIMITED_TEXT FILE_TYPE only. The default value is ','. TEXT_ESCAPE_CHARACTER: Specifies the character that is used to escape other characters in the source data. An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by an escape character will be interpreted as the ASCII bell, backspace, form feed, line feed, carriage return, horizontal tab, and vertical tab, respectively. For example, the escape character followed by an 'n' will be interpreted as a newline within a field value. The escape character can also be used to escape the quoting character, and will be treated as an escape character whether it is within a quoted field value or not. For DELIMITED_TEXT FILE_TYPE only. TEXT_HAS_HEADER: Indicates whether the source data contains a header row. For DELIMITED_TEXT FILE_TYPE only. Supported values: TRUE, FALSE. The default value is TRUE. TEXT_HEADER_PROPERTY_DELIMITER: Specifies the delimiter for column properties in the header row (if present). Cannot be set to the same value as TEXT_DELIMITER. For DELIMITED_TEXT FILE_TYPE only. The default value is '|'. TEXT_NULL_STRING: Specifies the character string that should be interpreted as a null value in the source data. For DELIMITED_TEXT FILE_TYPE only. The default value is '\N'. TEXT_QUOTE_CHARACTER: Specifies the character that should be interpreted as a field value quoting character in the source data. The character must appear at the beginning and end of a field value to take effect. Delimiters within quoted fields are treated as literals and not delimiters.
Within a quoted field, two consecutive quote characters will be interpreted as a single literal quote character, effectively escaping it. To not have a quote character, specify an empty string. For DELIMITED_TEXT FILE_TYPE only. The default value is '"'. TEXT_SEARCH_COLUMNS: Add the 'text_search' property to internally inferenced string columns. Comma-separated list of column names or '*' for all columns. To add the 'text_search' property only to string columns of a minimum size, also set the option 'text_search_min_column_length'. TEXT_SEARCH_MIN_COLUMN_LENGTH: Set the minimum column size. Used only when 'text_search_columns' has a value. TRIM_SPACE: If set to TRUE, remove leading or trailing space from fields. Supported values: TRUE, FALSE. The default value is FALSE. TRUNCATE_STRINGS: If set to TRUE, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE. TRUNCATE_TABLE: If set to TRUE, truncates the table specified by tableName prior to loading the file(s). Supported values: TRUE, FALSE. The default value is FALSE. TYPE_INFERENCE_MAX_RECORDS_READ: The default value is ''. TYPE_INFERENCE_MODE: Optimize type inferencing for either speed or accuracy. Supported values: ACCURACY: Scans data to get exactly-typed and sized columns for all data scanned. SPEED: Scans data and picks the widest possible column types so that 'all' values will fit with minimum data scanned.
ACCURACY. UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key. If set to TRUE, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to FALSE, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by IGNORE_EXISTING_PK and ERROR_HANDLING. If the specified table does not have a primary key, then this option has no effect. Supported values: TRUE: Upsert new records when primary keys match existing records. FALSE: Reject new records when primary keys match existing records.
FALSE.
Map.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
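A minimal sketch of preparing arguments for the payload-based overload above, with placeholder table name and data; the option keys are the lowercase forms of the constants documented above, and the actual call is commented out because it requires a running server:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: builds a delimited-text payload and option map for
// insertRecordsFromPayload. Placeholders throughout; no server required
// to construct the arguments themselves.
public class PayloadExample {
    static String buildCsvPayload() {
        // Pipe-delimited payload with a header row (text_has_header defaults to TRUE)
        return "id|name|price\n1|widget|9.99\n2|gadget|19.99\n";
    }

    static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("file_type", "delimited_text");
        options.put("text_delimiter", "|");           // payload uses pipes, not the ',' default
        options.put("update_on_existing_pk", "true"); // upsert if the table has a primary key
        return options;
    }

    public static void main(String[] args) {
        String dataText = buildCsvPayload();
        // With a live cluster, roughly (placeholders; assumes a null dataBytes
        // is accepted when only a text payload is supplied):
        //   GPUdb gpudb = new GPUdb("http://localhost:9191");
        //   gpudb.insertRecordsFromPayload(
        //       "ki_home.product",
        //       dataText,
        //       null,              // dataBytes: no binary payload
        //       new HashMap<>(),   // modifyColumns (not implemented yet)
        //       new HashMap<>(),   // createTableOptions (defaults)
        //       buildOptions());
        System.out.println(dataText.split("\n").length - 1 + " data rows");
    }
}
```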
-
insertRecordsFromQuery
public InsertRecordsFromQueryResponse insertRecordsFromQuery(InsertRecordsFromQueryRequest request) throws GPUdbException
Computes a remote query result and inserts the result data into a new or existing table.- Parameters:
request - Request object containing the parameters for the operation.- Returns:
Response object containing the results of the operation.- Throws:
GPUdbException - if an error occurs during the operation.
-
insertRecordsFromQuery
public InsertRecordsFromQueryResponse insertRecordsFromQuery(String tableName, String remoteQuery, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options) throws GPUdbException
Computes a remote query result and inserts the result data into a new or existing table.- Parameters:
tableName - Name of the table into which the data will be inserted, in [schema_name.]table_name format, using standard name resolution rules. If the table does not exist, the table will be created using either an existing TYPE_ID or the type inferred from the remote query, and the new table name will have to meet standard table naming criteria. remoteQuery - Query for which result data needs to be imported. modifyColumns - Not implemented yet. The default value is an empty Map. createTableOptions - Options used when creating the target table. TYPE_ID: ID of a currently registered type. The default value is ''. NO_ERROR_IF_EXISTS: If TRUE, prevents an error from occurring if the table already exists and is of the given type. If a table with the same ID but a different type exists, it is still an error. Supported values: TRUE, FALSE. The default value is FALSE. IS_REPLICATED: Affects the distribution scheme for the table's data. If TRUE and the given type has no explicit shard key defined, the table will be replicated. If FALSE, the table will be sharded according to the shard key specified in the given TYPE_ID, or randomly sharded, if no shard key is specified. Note that a type containing a shard key cannot be used to create a replicated table. Supported values: TRUE, FALSE. The default value is FALSE. FOREIGN_KEYS: Semicolon-separated list of foreign keys, of the format '(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]'. FOREIGN_SHARD_KEY: Foreign shard key of the format 'source_column references shard_by_column from target_table(primary_key_column)'. PARTITION_TYPE: Partitioning scheme to use. Supported values: RANGE: Use range partitioning. INTERVAL: Use interval partitioning. LIST: Use list partitioning. HASH: Use hash partitioning. SERIES: Use series partitioning.
PARTITION_KEYS: Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by PARTITION_DEFINITIONS. PARTITION_DEFINITIONS: Comma-separated list of partition definitions, whose format depends on the choice of PARTITION_TYPE. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example formats. IS_AUTOMATIC_PARTITION: If TRUE, a new partition will be created for values which don't fall into an existing partition. Currently only supported for list partitions. Supported values: TRUE, FALSE. The default value is FALSE. TTL: Sets the TTL of the table specified in tableName. CHUNK_SIZE: Indicates the number of records per chunk to be used for this table. IS_RESULT_TABLE: Indicates whether the table is a memory-only table. A result table cannot contain columns with text_search data-handling, and it will not be retained if the server is restarted. Supported values: TRUE, FALSE. The default value is FALSE. STRATEGY_DEFINITION: The tier strategy for the table and its columns. COMPRESSION_CODEC: The default compression codec for this table's columns.
Map. options - Optional parameters. BAD_RECORD_TABLE_NAME: Optional name of a table to which records that were rejected are written. The bad-record table has the following columns: line_number (long), line_rejected (string), error_message (string). When error handling is ABORT, the bad-record table is not populated. BAD_RECORD_TABLE_LIMIT: A positive integer indicating the maximum number of records that can be written to the bad-record table. Default value is 10000. BATCH_SIZE: Number of records per batch when inserting data. DATASOURCE_NAME: Name of an existing external data source from which the table will be loaded. ERROR_HANDLING: Specifies how errors should be handled upon insertion. Supported values: PERMISSIVE: Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped. IGNORE_BAD_RECORDS: Malformed records are skipped. ABORT: Stops current insertion and aborts entire operation when an error is encountered. Primary key collisions are considered abortable errors in this mode.
ABORT. IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for inserting into a table with a primary key, only used when not in upsert mode (upsert mode is disabled when UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any record being inserted that is rejected for having primary key values that match those of an existing table record will be ignored with no error generated. If FALSE, the rejection of any record for having primary key values matching an existing record will result in an error being reported, as determined by ERROR_HANDLING. If the specified table does not have a primary key or if upsert mode is in effect (UPDATE_ON_EXISTING_PK is TRUE), then this option has no effect. Supported values: TRUE: Ignore new records whose primary key values collide with those of existing records. FALSE: Treat as errors any new records whose primary key values collide with those of existing records.
FALSE. INGESTION_MODE: Whether to do a full load, dry run, or perform a type inference on the source data. Supported values: FULL: Run a type inference on the source data (if needed) and ingest. DRY_RUN: Does not load data, but walks through the source data and determines the number of valid records, taking into account the current mode of ERROR_HANDLING. TYPE_INFERENCE_ONLY: Infer the type of the source data and return, without ingesting any data. The inferred type is returned in the response.
FULL. JDBC_FETCH_SIZE: The JDBC fetch size, which determines how many rows to fetch per round trip. JDBC_SESSION_INIT_STATEMENT: Executes the statement per each JDBC session before doing the actual load. The default value is ''. NUM_SPLITS_PER_RANK: Optional: number of splits for reading data per rank. Default is external_file_reader_num_tasks. The default value is ''. NUM_TASKS_PER_RANK: Optional: number of tasks for reading data per rank. Default is external_file_reader_num_tasks. PRIMARY_KEYS: Optional: comma-separated list of column names to set as primary keys, when not specified in the type. The default value is ''. SHARD_KEYS: Optional: comma-separated list of column names to set as shard keys, when not specified in the type. The default value is ''. SUBSCRIBE: Continuously poll the data source to check for new data and load it into the table. Supported values: TRUE, FALSE. The default value is FALSE. TRUNCATE_TABLE: If set to TRUE, truncates the table specified by tableName prior to loading the data. Supported values: TRUE, FALSE. The default value is FALSE. REMOTE_QUERY: Remote SQL query from which data will be sourced. REMOTE_QUERY_ORDER_BY: Name of the column to be used for splitting the query into multiple sub-queries, using the ordering of the given column. The default value is ''. REMOTE_QUERY_FILTER_COLUMN: Name of the column to be used for splitting the query into multiple sub-queries, using the data distribution of the given column. The default value is ''. REMOTE_QUERY_INCREASING_COLUMN: Column on the subscribed remote query result that will increase for new records (e.g., TIMESTAMP). The default value is ''. REMOTE_QUERY_PARTITION_COLUMN: Alias name for remote_query_filter_column. The default value is ''. TRUNCATE_STRINGS: If set to TRUE, truncate string values that are longer than the column's type size. Supported values: TRUE, FALSE. The default value is FALSE. UPDATE_ON_EXISTING_PK: Specifies the record collision policy for inserting into a table with a primary key.
If set to TRUE, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record (the new data will be "upserted"). If set to FALSE, any existing table record with primary key values that match those of a record being inserted will remain unchanged, while the new record will be rejected and the error handled as determined by IGNORE_EXISTING_PK and ERROR_HANDLING. If the specified table does not have a primary key, then this option has no effect. Supported values: TRUE: Upsert new records when primary keys match existing records. FALSE: Reject new records when primary keys match existing records.
FALSE.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
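The string-to-string options above might be assembled as follows (a minimal sketch; the query text and chosen option values are illustrative, and which options apply depends on the datasource being loaded from):

```java
import java.util.HashMap;
import java.util.Map;

public class RemoteQueryOptionsExample {
    // Builds an options map for a remote-query load; keys mirror the option
    // names documented above, passed as lowercase string-to-string entries.
    public static Map<String, String> build() {
        Map<String, String> options = new HashMap<>();
        options.put("remote_query", "SELECT * FROM source_table"); // hypothetical query
        options.put("jdbc_fetch_size", "10000");       // rows fetched per round trip
        options.put("truncate_table", "true");         // clear the target before load
        options.put("update_on_existing_pk", "true");  // upsert on primary key collisions
        return options;
    }

    public static void main(String[] args) {
        // The map would then be passed as the options argument of the endpoint call.
        System.out.println(build().get("update_on_existing_pk"));
    }
}
```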
-
insertRecordsRandom
public InsertRecordsRandomResponse insertRecordsRandom(InsertRecordsRandomRequest request) throws GPUdbException
Generates a specified number of random records and adds them to the given table. There is an optional parameter that allows the user to customize the ranges of the column values. It also allows the user to specify linear profiles for some or all columns, in which case linear values are generated rather than random ones. Only individual tables are supported for this operation. This operation is synchronous, meaning that a response will not be returned until all random records are fully available.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
insertRecordsRandom
public InsertRecordsRandomResponse insertRecordsRandom(String tableName, long count, Map<String,Map<String,Double>> options) throws GPUdbException
Generates a specified number of random records and adds them to the given table. There is an optional parameter that allows the user to customize the ranges of the column values. It also allows the user to specify linear profiles for some or all columns, in which case linear values are generated rather than random ones. Only individual tables are supported for this operation. This operation is synchronous, meaning that a response will not be returned until all random records are fully available.
- Parameters:
tableName - Table to which random records will be added, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table, not a view.
count - Number of records to generate.
options - Optional parameter to pass in specifications for the randomness of the values. This map is different from the *options* parameter of most other endpoints in that it is a map of string to map of string to doubles, while most others are maps of string to string. In this map, the top-level keys represent which column's parameters are being specified, while the internal keys represent which parameter is being specified. These parameters take on different meanings depending on the type of the column. Below follows a more detailed description of the map:
- SEED: If provided, the internal random number generator will be initialized with the given value. The minimum is 0. This allows the same set of random numbers to be generated across invocations of this endpoint in case the user wants to repeat the test. Since options is a map of maps, an internal map is needed to provide the seed value. For example, to pass 100 as the seed value through this parameter, you need something equivalent to: 'options' = {'seed': { 'value': 100 } }.
  - VALUE: The seed value to use.
- ALL: This key indicates that the specifications relayed in the internal map are to be applied to all columns of the records.
  - MIN: For numerical columns, the minimum of the generated values is set to this value. Default is -99999. For point, shape, and track columns, min for numeric 'x' and 'y' columns needs to be within [-180, 180] and [-90, 90], respectively. The default minimum possible values for these columns in such cases are -180.0 and -90.0. For the 'TIMESTAMP' column, the default minimum corresponds to Jan 1, 2010. For string columns, the minimum length of the randomly generated strings is set to this value (default is 0). If both minimum and maximum are provided, minimum must be less than or equal to max. If the min is outside the accepted ranges for string columns and the 'x' and 'y' columns for point/shape/track, then those parameters will not be set; however, an error will not be thrown in such a case. It is the responsibility of the user to use the ALL parameter judiciously.
  - MAX: For numerical columns, the maximum of the generated values is set to this value. Default is 99999. For point, shape, and track columns, max for numeric 'x' and 'y' columns needs to be within [-180, 180] and [-90, 90], respectively. The default maximum possible values for these columns in such cases are 180.0 and 90.0. For string columns, the maximum length of the randomly generated strings. If both minimum and maximum are provided, max must be greater than or equal to min. If the max is outside the accepted ranges for string columns and the 'x' and 'y' columns for point/shape/track, then those parameters will not be set; however, an error will not be thrown in such a case. It is the responsibility of the user to use the ALL parameter judiciously.
  - INTERVAL: If specified, generate values for all columns evenly spaced with the given interval value. If a max value is specified for a given column, the data is randomly generated between min and max and decimated down to the interval. If no max is provided, the data is linearly generated starting at the minimum value (instead of generating random data). For non-decimated string-type columns the interval value is ignored. Instead, the values are generated following the pattern: 'attrname_creationIndex#', i.e. the column name suffixed with an underscore and a running counter (starting at 0). For string types with limited size (e.g. char4) the prefix is dropped. No nulls will be generated for nullable columns.
  - NULL_PERCENTAGE: If specified, then generate the given percentage of the count as nulls for all nullable columns. This option will be ignored for non-nullable columns. The value must be within the range [0, 1.0]. The default value is 5% (0.05).
  - CARDINALITY: If specified, limit the randomly generated values to a fixed set. Not allowed on a column with interval specified, and is not applicable to WKT or track-specific columns. The value must be greater than 0. This option is disabled by default.
- ATTR_NAME: Use the desired column name in place of ATTR_NAME, and set the following parameters for the column specified. This overrides any parameter set by ALL.
  - MIN, MAX, INTERVAL, CARDINALITY: Same as the corresponding keys under ALL, but applied only to the named column.
  - NULL_PERCENTAGE: If specified and if this column is nullable, then generate the given percentage of the count as nulls. This option will result in an error if the column is not nullable. The value must be within the range [0, 1.0]. The default value is 5% (0.05).
- TRACK_LENGTH: This key-map pair is only valid for track data sets (an error is thrown otherwise). No nulls will be generated for nullable columns.
  - MIN: Minimum possible length for generated series; default is 100 records per series. Must be an integral value within the range [1, 500]. If both min and max are specified, min must be less than or equal to max.
  - MAX: Maximum possible length for generated series; default is 500 records per series. Must be an integral value within the range [1, 500]. If both min and max are specified, max must be greater than or equal to min.

The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
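The seed/all/per-column structure described above can be assembled like this (a minimal sketch; the table is assumed to have a numeric column named 'x', and the endpoint call itself, e.g. gpudb.insertRecordsRandom("my_table", 1000, options), is omitted since it needs a live server):

```java
import java.util.HashMap;
import java.util.Map;

public class RandomRecordOptionsExample {
    // Builds the map-of-maps options for insertRecordsRandom: top-level keys
    // name a column (or "seed"/"all"); inner keys name a parameter.
    public static Map<String, Map<String, Double>> build() {
        Map<String, Map<String, Double>> options = new HashMap<>();

        Map<String, Double> seed = new HashMap<>();
        seed.put("value", 100.0);              // fixed seed for a repeatable sequence
        options.put("seed", seed);

        Map<String, Double> all = new HashMap<>();
        all.put("min", 0.0);                   // lower bound for all columns
        all.put("max", 1000.0);                // upper bound for all columns
        all.put("null_percentage", 0.1);       // 10% nulls in nullable columns
        options.put("all", all);

        Map<String, Double> x = new HashMap<>();
        x.put("min", -180.0);                  // per-column override ('x' is hypothetical)
        x.put("max", 180.0);
        options.put("x", x);

        return options;
    }

    public static void main(String[] args) {
        System.out.println(build().get("seed").get("value"));
    }
}
```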
-
insertSymbol
public InsertSymbolResponse insertSymbol(InsertSymbolRequest request) throws GPUdbException
Adds a symbol or icon (i.e. an image) to represent data points when data is rendered visually. Users must provide the symbol identifier (string), a format (currently supported: 'svg' and 'svg_path'), the data for the symbol, and any additional optional parameters (e.g. color). To have a symbol used for rendering, create a table with a string column named 'SYMBOLCODE' (along with 'x' or 'y', for example). Then, when the table is rendered (via WMS), if the 'dosymbology' parameter is 'true', the value of the 'SYMBOLCODE' column is used to pick the symbol displayed for each point.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
insertSymbol
public InsertSymbolResponse insertSymbol(String symbolId, String symbolFormat, ByteBuffer symbolData, Map<String,String> options) throws GPUdbException
Adds a symbol or icon (i.e. an image) to represent data points when data is rendered visually. Users must provide the symbol identifier (string), a format (currently supported: 'svg' and 'svg_path'), the data for the symbol, and any additional optional parameters (e.g. color). To have a symbol used for rendering, create a table with a string column named 'SYMBOLCODE' (along with 'x' or 'y', for example). Then, when the table is rendered (via WMS), if the 'dosymbology' parameter is 'true', the value of the 'SYMBOLCODE' column is used to pick the symbol displayed for each point.
- Parameters:
symbolId - The ID of the symbol being added. This is the same ID that should be in the 'SYMBOLCODE' column for objects using this symbol.
symbolFormat - Specifies the symbol format. Must be either 'svg' or 'svg_path'.
symbolData - The actual symbol data. If symbolFormat is 'svg', this should be the raw bytes representing an SVG file. If symbolFormat is 'svg_path', this should be an SVG path string, for example: 'M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z'.
options - Optional parameters.
- COLOR: If symbolFormat is 'svg' this is ignored. If symbolFormat is 'svg_path', this option specifies the color (in RRGGBB hex format) of the path. For example, to have the path rendered in red, use 'FF0000'. If 'color' is not provided, '00FF00' (i.e. green) is used by default.

The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
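Preparing the 'svg_path' payload might look like this (a sketch; the symbol ID 'my_symbol' and the surrounding GPUdb connection are assumed, so the call itself is shown only as a comment):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class SymbolDataExample {
    // Wraps an SVG path string as the ByteBuffer expected by the symbolData
    // parameter when symbolFormat is 'svg_path'.
    public static ByteBuffer pathData() {
        String svgPath = "M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z";
        return ByteBuffer.wrap(svgPath.getBytes(StandardCharsets.UTF_8));
    }

    // Color option: RRGGBB hex, honored only for the 'svg_path' format.
    public static Map<String, String> options() {
        Map<String, String> options = new HashMap<>();
        options.put("color", "FF0000"); // render the path in red
        return options;
    }

    public static void main(String[] args) {
        // The call itself would be:
        // gpudb.insertSymbol("my_symbol", "svg_path", pathData(), options());
        System.out.println(options().get("color"));
    }
}
```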
-
killProc
public KillProcResponse killProc(KillProcRequest request) throws GPUdbException
Kills a running proc instance.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
killProc
public KillProcResponse killProc(String runId, Map<String,String> options) throws GPUdbException
Kills a running proc instance.
- Parameters:
runId - The run ID of a running proc instance. If a proc with a matching run ID is not found or the proc instance has already completed, no procs will be killed. If not specified, all running proc instances will be killed. The default value is ''.
options - Optional parameters.
- RUN_TAG: If runId is specified, kill the proc instance that has a matching run ID and a matching run tag that was provided to executeProc. If runId is not specified, kill the proc instance(s) where a matching run tag was provided to executeProc. The default value is ''.
- CLEAR_EXECUTE_AT_STARTUP: If TRUE, kill and remove the instance of the proc matching the auto-start run ID that was created to run when the database is started. The auto-start run ID was returned from executeProc and can be retrieved using showProc. Supported values: TRUE, FALSE. The default value is FALSE.

The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
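Targeting proc instances by run tag rather than run ID can be set up as follows (a sketch; the tag "nightly_etl" is hypothetical and would have been passed to executeProc earlier):

```java
import java.util.HashMap;
import java.util.Map;

public class KillProcOptionsExample {
    // Options that select proc instances by their run tag.
    public static Map<String, String> build() {
        Map<String, String> options = new HashMap<>();
        options.put("run_tag", "nightly_etl"); // hypothetical tag given to executeProc
        return options;
    }

    public static void main(String[] args) {
        // With an empty runId, every running instance with this tag is killed:
        // gpudb.killProc("", KillProcOptionsExample.build());
        System.out.println(build().get("run_tag"));
    }
}
```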
-
listGraph
public ListGraphResponse listGraph(ListGraphRequest request) throws GPUdbException
- Throws:
GPUdbException
-
listGraph
public ListGraphResponse listGraph(String graphName, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
lockTable
public LockTableResponse lockTable(LockTableRequest request) throws GPUdbException
Manages global access to a table's data. By default, a table has a lockType of READ_WRITE, indicating all operations are permitted. A user may request a READ_ONLY or a WRITE_ONLY lock, after which only read or write operations, respectively, are permitted on the table until the lock is removed. When lockType is NO_ACCESS, no operations are permitted on the table. The lock status can be queried by setting lockType to STATUS.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
lockTable
public LockTableResponse lockTable(String tableName, String lockType, Map<String,String> options) throws GPUdbException
Manages global access to a table's data. By default, a table has a lockType of READ_WRITE, indicating all operations are permitted. A user may request a READ_ONLY or a WRITE_ONLY lock, after which only read or write operations, respectively, are permitted on the table until the lock is removed. When lockType is NO_ACCESS, no operations are permitted on the table. The lock status can be queried by setting lockType to STATUS.
- Parameters:
tableName - Name of the table to be locked, in [schema_name.]table_name format, using standard name resolution rules. It must be a currently existing table or view.
lockType - The type of lock being applied to the table. Setting it to STATUS will return the current lock status of the table without changing it. Supported values:
- STATUS: Show locked status
- NO_ACCESS: Allow no read/write operations
- READ_ONLY: Allow only read operations
- WRITE_ONLY: Allow only write operations
- READ_WRITE: Allow all read/write operations

The default value is STATUS.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
matchGraph
public MatchGraphResponse matchGraph(MatchGraphRequest request) throws GPUdbException
Matches a directed route implied by a given set of latitude/longitude points to an existing underlying road network graph using a given solution type. IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
matchGraph
public MatchGraphResponse matchGraph(String graphName, List<String> samplePoints, String solveMethod, String solutionTable, Map<String,String> options) throws GPUdbException
Matches a directed route implied by a given set of latitude/longitude points to an existing underlying road network graph using a given solution type. IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.
- Parameters:
graphName - Name of the underlying geospatial graph resource to match to using samplePoints.
samplePoints - Sample points used to match to an underlying geospatial graph. Sample points must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with: existing column names, e.g., 'table.column AS SAMPLE_X'; expressions, e.g., 'ST_MAKEPOINT(table.x, table.y) AS SAMPLE_WKTPOINT'; or constant values, e.g., '{1, 2, 10} AS SAMPLE_TRIPID'.
solveMethod - The type of solver to use for graph matching. Supported values:
- MARKOV_CHAIN: Matches samplePoints to the graph using the Hidden Markov Model (HMM)-based method, which conducts a range-tree closest-edge search to find the best combinations of possible road segments (NUM_SEGMENTS) for each sample point to create the best route. The route is secured one point at a time while looking ahead CHAIN_WIDTH number of points, so the prediction is corrected after each point. This solution type is the most accurate but also the most computationally intensive. Related options: NUM_SEGMENTS and CHAIN_WIDTH.
- MATCH_OD_PAIRS: Matches samplePoints to find the most probable path between origin and destination pairs with cost constraints.
- MATCH_SUPPLY_DEMAND: Matches samplePoints to optimize scheduling multiple supplies (trucks) with varying sizes to varying demand sites with varying capacities per depot. Related options: PARTIAL_LOADING and MAX_COMBINATIONS.
- MATCH_BATCH_SOLVES: Matches samplePoints source and destination pairs for the shortest path solves in batch mode.
- MATCH_LOOPS: Matches closed loops (Eulerian paths) originating and ending at each graph node within min and max hops (levels).
- MATCH_CHARGING_STATIONS: Matches an optimal path across a number of EV-charging stations between source and target locations.
- MATCH_SIMILARITY: Matches the intersection set(s) by computing the Jaccard similarity score between node pairs.
- MATCH_PICKUP_DROPOFF: Matches the pickups and dropoffs by optimizing the total trip costs.
- MATCH_CLUSTERS: Matches the graph nodes with a cluster index using the Louvain clustering algorithm.
- MATCH_PATTERN: Matches a pattern in the graph.
- MATCH_EMBEDDING: Creates vector node embeddings.
- MATCH_ISOCHRONE: Solves for isochrones for a set of input sources.

The default value is MARKOV_CHAIN.
solutionTable - The name of the table used to store the results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. This table contains a track of geospatial points for the matched portion of the graph, a track ID, and a score value. Also outputs a details table containing a trip ID (that matches the track ID), the latitude/longitude pair, the timestamp the point was recorded at, and an edge ID corresponding to the matched road segment. Must not be an existing table of the same name. The default value is ''.
options - Additional parameters.
- GPS_NOISE: GPS noise value (in meters) to remove redundant sample points. Use -1 to disable noise reduction. The default value accounts for 95% of point variation (+ or - 5 meters). The default value is '5.0'.
- NUM_SEGMENTS: Maximum number of potentially matching road segments for each sample point. For the MARKOV_CHAIN solver, the default is 3. The default value is '3'.
- SEARCH_RADIUS: Maximum search radius used when snapping sample points onto potentially matching surrounding segments. The default value corresponds to approximately 100 meters. The default value is '0.001'.
- CHAIN_WIDTH: For the MARKOV_CHAIN solver only. Length of the sample points lookahead window within the Markov kernel; the larger the number, the more accurate the solution. The default value is '9'.
- SOURCE: Optional WKT starting point from samplePoints for the solver. The default behavior for the endpoint is to use time to determine the starting point. The default value is 'POINT NULL'.
- DESTINATION: Optional WKT ending point from samplePoints for the solver. The default behavior for the endpoint is to use time to determine the destination point. The default value is 'POINT NULL'.
- PARTIAL_LOADING: For the MATCH_SUPPLY_DEMAND solver only. When false (non-default), trucks do not off-load at the demand (store) side if the remainder is less than the store's need. Supported values: TRUE (partial off-loading at multiple store (demand) locations), FALSE (no partial off-loading allowed if supply is less than the store's demand). The default value is TRUE.
- MAX_COMBINATIONS: For the MATCH_SUPPLY_DEMAND solver only. This is the cutoff for the number of generated combinations for sequencing the demand locations - this can be increased up to 2M. The default value is '10000'.
- MAX_SUPPLY_COMBINATIONS: For the MATCH_SUPPLY_DEMAND solver only. This is the cutoff for the number of generated combinations for sequencing the supply locations if/when 'permute_supplies' is true. The default value is '10000'.
- LEFT_TURN_PENALTY: This will add an additional weight over the edges labeled as 'left turn' if the 'add_turn' option parameter of createGraph was invoked at graph creation. The default value is '0.0'.
- RIGHT_TURN_PENALTY: This will add an additional weight over the edges labeled as 'right turn' if the 'add_turn' option parameter of createGraph was invoked at graph creation. The default value is '0.0'.
- INTERSECTION_PENALTY: This will add an additional weight over the edges labeled as 'intersection' if the 'add_turn' option parameter of createGraph was invoked at graph creation. The default value is '0.0'.
- SHARP_TURN_PENALTY: This will add an additional weight over the edges labeled as 'sharp turn' or 'u-turn' if the 'add_turn' option parameter of createGraph was invoked at graph creation. The default value is '0.0'.
- AGGREGATED_OUTPUT: For the MATCH_SUPPLY_DEMAND solver only. When true (default), each record in the output table shows a particular truck's scheduled cumulative round trip path (MULTILINESTRING) and the corresponding aggregated cost. Otherwise, each record shows a single scheduled truck route (LINESTRING) towards a particular demand location (store ID) with its corresponding cost. The default value is 'true'.
- OUTPUT_TRACKS: For the MATCH_SUPPLY_DEMAND solver only. When true (non-default), the output will be in tracks format for all the round trips of each truck, in which the timestamps are populated directly from the edge weights starting from their originating depots.
The default value is 'false'.
- MAX_TRIP_COST: For the MATCH_SUPPLY_DEMAND and MATCH_PICKUP_DROPOFF solvers only. If this constraint is greater than zero (default), then the trucks/rides will skip traveling from one demand/pickup location to another if the cost between them is greater than this number (distance or time). A zero (default) value means no check is performed. The default value is '0.0'.
- FILTER_FOLDING_PATHS: For the MARKOV_CHAIN solver only. When true (non-default), the paths per sequence combination are checked for folding-over patterns; this can significantly increase the execution time depending on the chain width and the number of GPS samples. Supported values: TRUE, FALSE. The default value is FALSE.
- UNIT_UNLOADING_COST: For the MATCH_SUPPLY_DEMAND solver only. The unit cost per load amount to be delivered. If this value is greater than zero (default), then the additional cost of this unit load multiplied by the total dropped load will be added to the trip cost to the demand location. The default value is '0.0'.
- MAX_NUM_THREADS: For the MARKOV_CHAIN solver only. If specified (greater than zero), the maximum number of threads will not be greater than the specified value. It can be lower due to the memory and the number of cores available. The default value of zero allows the algorithm to set the maximal number of threads within these constraints. The default value is '0'.
- SERVICE_LIMIT: For the MATCH_SUPPLY_DEMAND solver only. If specified (greater than zero), any supply actor's total service cost (distance or time) will be limited by the specified value, including multiple rounds (if set). The default value is '0.0'.
- ENABLE_REUSE: For the MATCH_SUPPLY_DEMAND solver only. If specified (true), all supply actors can be scheduled for second rounds from their originating depots. Supported values: TRUE (allows reusing supply actors (trucks, e.g.) for scheduling again), FALSE (supply actors are scheduled only once from their depots).
TRUE.BATCH_TSM_MODE: For theMATCH_SUPPLY_DEMANDsolver only. When enabled, it sets the number of visits on each demand location by a single salesman at each trip is considered to be (one) 1, otherwise there is no bound. Supported values:TRUE: Sets only one visit per demand location by a salesman (TSM mode)FALSE: No preset limit (usual MSDO mode)
FALSE.ROUND_TRIP: For theMATCH_SUPPLY_DEMANDsolver only. When enabled, the supply will have to return back to the origination location. Supported values:TRUE: The optimization is done for trips in round trip manner always returning to originating locationsFALSE: Supplies do not have to come back to their originating locations in their routes. The routes are considered finished at the final dropoff.
TRUE.NUM_CYCLES: For theMATCH_CLUSTERSsolver only. Terminates the cluster exchange iterations across 2-step-cycles (outer loop) when quality does not improve during iterations. The default value is '10'.NUM_LOOPS_PER_CYCLE: For theMATCH_CLUSTERSandMATCH_EMBEDDINGsolvers only. Terminates the cluster exchanges within the first step iterations of a cycle (inner loop) unless convergence is reached. The default value is '10'.NUM_OUTPUT_CLUSTERS: For theMATCH_CLUSTERSsolver only. Limits the output to the top 'num_output_clusters' clusters based on density. Default value of zero outputs all clusters. The default value is '0'.MAX_NUM_CLUSTERS: For theMATCH_CLUSTERSandMATCH_EMBEDDINGsolvers only. If set (value greater than zero), it terminates when the number of clusters goes below than this number. For embedding solver the default is 8. The default value is '0'.CLUSTER_QUALITY_METRIC: For theMATCH_CLUSTERSsolver only. The quality metric for Louvain modularity optimization solver. Supported values:GIRVAN: Uses the Newman Girvan quality metric for cluster solverSPECTRAL: Applies recursive spectral bisection (RSB) partitioning solver
GIRVAN.RESTRICTED_TYPE: For theMATCH_SUPPLY_DEMANDsolver only. Optimization is performed by restricting routes labeled by 'MSDO_ODDEVEN_RESTRICTED' only for this supply actor (truck) type. Supported values:ODD: Applies odd/even rule restrictions to odd tagged vehicles.EVEN: Applies odd/even rule restrictions to even tagged vehicles.NONE: Does not apply odd/even rule restrictions to any vehicles.
The default value is NONE.
- SERVER_ID: Indicates which graph server(s) to send the request to. The default is to send to the server, amongst those containing the corresponding graph, that has the most computational bandwidth. The default value is ''.
- INVERSE_SOLVE: For the MATCH_BATCH_SOLVES solver only. Solves source-destination pairs using the inverse shortest path solver. Supported values: TRUE, FALSE. The default value is FALSE.
- MIN_LOOP_LEVEL: For the MATCH_LOOPS solver only. Finds closed loops around each node deducible not less than this minimal hop (level) deep. The default value is '0'.
- MAX_LOOP_LEVEL: For the MATCH_LOOPS solver only. Finds closed loops around each node deducible not more than this maximal hop (level) deep. The default value is '5'.
- SEARCH_LIMIT: For the MATCH_LOOPS solver only. Searches within this limit of nodes per vertex to detect loops. A value of zero means there is no limit. The default value is '10000'.
- OUTPUT_BATCH_SIZE: For the MATCH_LOOPS solver only. Uses this value as the batch size of the number of loops when flushing (inserting) to the output table. The default value is '1000'.
- MULTI_STEP: For the MATCH_SUPPLY_DEMAND solver only. Runs the supply demand solver repeatedly in a multi-step cycle by switching supplies to demands until it reaches the main hub supply. Supported values: TRUE, FALSE. The default value is FALSE.
- CHARGING_CAPACITY: For the MATCH_CHARGING_STATIONS solver only. This is the maximum EV-charging capacity of a vehicle (distance in meters or time in seconds depending on the unit of the graph weights). The default value is '300000.0'.
- CHARGING_CANDIDATES: For the MATCH_CHARGING_STATIONS solver only. The solver searches for this many stations closest around each base charging location found by capacity. The default value is '10'.
- CHARGING_PENALTY: For the MATCH_CHARGING_STATIONS solver only. This is the penalty for full charging. The default value is '30000.0'.
- MAX_HOPS: For the MATCH_SIMILARITY and MATCH_EMBEDDING solvers only. Searches within this maximum number of hops for source and target node pairs to compute the Jaccard scores. The default value is '3'.
- TRAVERSAL_NODE_LIMIT: For the MATCH_SIMILARITY solver only. Limits the traversal depth if it reaches this many nodes. The default value is '1000'.
- PAIRED_SIMILARITY: For the MATCH_SIMILARITY solver only. If true, it computes the Jaccard score between each pair; otherwise it will compute the Jaccard score from the intersection set between the source and target nodes. Supported values: TRUE, FALSE. The default value is TRUE.
- FORCE_UNDIRECTED: For the MATCH_PATTERN and MATCH_EMBEDDING solvers only. Pattern matching will use both pattern and graph as undirected if set to true. Supported values: TRUE, FALSE. The default value is FALSE.
- MAX_VECTOR_DIMENSION: For the MATCH_EMBEDDING solver only. Limits the number of dimensions in node vector embeddings. The default value is '1000'.
- OPTIMIZE_EMBEDDING_WEIGHTS: For the MATCH_EMBEDDING solver only. Solves to find the optimal weights per sub-feature in vector embeddings. Supported values: TRUE, FALSE. The default value is FALSE.
- EMBEDDING_WEIGHTS: For the MATCH_EMBEDDING solver only. User-specified weights per sub-feature in vector embeddings. The string contains the comma-separated float values for each sub-feature in the vector space. These values will ONLY be used if 'optimize_embedding_weights' is false. The default value is '1.0,1.0,1.0,1.0'.
- OPTIMIZATION_SAMPLING_SIZE: For the MATCH_EMBEDDING solver only. Sets the number of random nodes from the graph for solving the weights using stochastic gradient descent. The default value is '1000'.
- OPTIMIZATION_MAX_ITERATIONS: For the MATCH_EMBEDDING solver only. When the iterations (epochs) for the convergence of the stochastic gradient descent algorithm reach this number, it bails out unless the relative error between consecutive iterations is below the 'optimization_error_tolerance' option. The default value is '1000'.
- OPTIMIZATION_ERROR_TOLERANCE: For the MATCH_EMBEDDING solver only. When the relative error between all of the weights' consecutive iterations falls below this threshold, the optimization cycle is interrupted unless the number of iterations reaches the limit set by the 'max_optimization_iterations' option. The default value is '0.001'.
- OPTIMIZATION_ITERATION_RATE: For the MATCH_EMBEDDING solver only. Otherwise known as the learning rate: the proportionality constant in front of the gradient term in successive iterations. The default value is '0.3'.
- MAX_RADIUS: For the MATCH_ISOCHRONE solver only. Sets the maximal reachability limit for computing isochrones. Zero means no limit. The default value is '0.0'.
Map.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
modifyGraph
public ModifyGraphResponse modifyGraph(ModifyGraphRequest request) throws GPUdbException
Update an existing graph network using the given nodes, edges, weights, restrictions, and options.
IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation and the Graph REST Tutorial before using this endpoint.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
modifyGraph
public ModifyGraphResponse modifyGraph(String graphName, List<String> nodes, List<String> edges, List<String> weights, List<String> restrictions, Map<String,String> options) throws GPUdbException
Update an existing graph network using the given nodes, edges, weights, restrictions, and options.
IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation and the Graph REST Tutorial before using this endpoint.
- Parameters:
graphName - Name of the graph resource to modify.
nodes - Nodes with which to update existing nodes in the graph specified by graphName. Review Nodes for more information. Nodes must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS NODE_ID', expressions, e.g., 'ST_MAKEPOINT(column1, column2) AS NODE_WKTPOINT', or raw values, e.g., '{9, 10, 11} AS NODE_ID'. If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph.
edges - Edges with which to update existing edges in the graph specified by graphName. Review Edges for more information. Edges must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS EDGE_ID', expressions, e.g., 'SUBSTR(column, 1, 6) AS EDGE_NODE1_NAME', or raw values, e.g., "{'family', 'coworker'} AS EDGE_LABEL". If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph.
weights - Weights with which to update existing weights in the graph specified by graphName. Review Weights for more information. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS WEIGHTS_EDGE_ID', expressions, e.g., 'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED', or raw values, e.g., '{4, 15} AS WEIGHTS_VALUESPECIFIED'. If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph.
restrictions - Restrictions with which to update existing restrictions in the graph specified by graphName. Review Restrictions for more information. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS RESTRICTIONS_EDGE_ID', expressions, e.g., 'column/2 AS RESTRICTIONS_VALUECOMPARED', or raw values, e.g., '{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'. If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph.
options - Optional parameters.
- RESTRICTION_THRESHOLD_VALUE: Value-based restriction comparison. Any node or edge with a RESTRICTIONS_VALUECOMPARED value greater than the RESTRICTION_THRESHOLD_VALUE will not be included in the graph.
- EXPORT_CREATE_RESULTS: If set to TRUE, returns the graph topology in the response as arrays. Supported values: TRUE, FALSE. The default value is FALSE.
- ENABLE_GRAPH_DRAW: If set to TRUE, adds an 'EDGE_WKTLINE' column identifier to the specified GRAPH_TABLE so the graph can be viewed via WMS; for social and non-geospatial graphs, the 'EDGE_WKTLINE' column identifier will be populated with spatial coordinates derived from a flattening layout algorithm so the graph can still be viewed. Supported values: TRUE, FALSE. The default value is FALSE.
- SAVE_PERSIST: If set to TRUE, the graph will be saved in the persist directory (see the config reference for more information). If set to FALSE, the graph will be removed when the graph server is shut down. Supported values: TRUE, FALSE. The default value is FALSE.
- ADD_TABLE_MONITOR: Adds a table monitor to every table used in the creation of the graph; this table monitor will trigger the graph to update dynamically upon inserts to the source table(s). Note that upon database restart, if SAVE_PERSIST is also set to TRUE, the graph will be fully reconstructed and the table monitors will be reattached. For more details on table monitors, see createTableMonitor. Supported values: TRUE, FALSE. The default value is FALSE.
- GRAPH_TABLE: If specified, the created graph is also created as a table with the given name, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. This table will have the following identifier columns: 'EDGE_ID', 'EDGE_NODE1_ID', 'EDGE_NODE2_ID'. If left blank, no table is created. The default value is ''.
- REMOVE_LABEL_ONLY: When RESTRICTIONS on labeled entities are requested, if set to true, this will NOT delete the entity but only the label associated with the entity. Otherwise (default), it will delete both the label AND the entity. Supported values: TRUE, FALSE. The default value is FALSE.
- ADD_TURNS: Adds dummy 'pillowed' edges around intersection nodes where there are more than three edges, so that additional weight penalties can be imposed by the solve endpoints (increases the total number of edges). Supported values: TRUE, FALSE. The default value is FALSE.
- TURN_ANGLE: A value in degrees that modifies the thresholds for attributing right turns, left turns, sharp turns, and intersections. It is the vertical deviation angle from the incoming edge to the intersection node. The larger the value, the larger the threshold for sharp turns and intersections; the smaller the value, the larger the threshold for right and left turns; 0 < turn_angle < 90. The default value is '60'.
- USE_RTREE: Use a range tree structure to accelerate and improve the accuracy of snapping, especially to edges. Supported values: TRUE, FALSE. The default value is TRUE.
- LABEL_DELIMITER: If provided, the label string will be split according to this delimiter and each sub-string will be applied as a separate label onto the specified edge. The default value is ''.
- ALLOW_MULTIPLE_EDGES: Multigraph choice; if set to true, multiple edges with the same node pairs are allowed; otherwise, new edges with the same node pairs as existing edges will not be inserted. Supported values: TRUE, FALSE. The default value is TRUE.
- EMBEDDING_TABLE: If the table exists (it should be generated by the match/graph match_embedding solver), the vector embeddings for the newly inserted nodes will be appended to this table. The default value is ''.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
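A minimal sketch of assembling a modifyGraph call from the string-parameter overload above. The graph name, table and column names, and server URL are hypothetical, and the client call itself is shown commented out because it requires a running server; the runnable part only builds the identifier lists and options map, assuming the option keys are the lowercase string forms of the constants documented above.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ModifyGraphExample {
    // Builds the options map for a modifyGraph call; keys are assumed to be
    // the lowercase forms of the option constants documented above.
    public static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("save_persist", "true");       // keep the graph across server restarts
        options.put("add_table_monitor", "true");  // auto-update the graph on source-table inserts
        return options;
    }

    public static void main(String[] args) {
        // Identifier combinations built from hypothetical table/column names
        List<String> nodes = Arrays.asList("road_nodes.id AS NODE_ID");
        List<String> edges = Arrays.asList("road_edges.id AS EDGE_ID",
                                           "road_edges.wkt AS EDGE_WKTLINE");
        List<String> weights = Arrays.asList("road_edges.time AS WEIGHTS_VALUESPECIFIED");
        List<String> restrictions = new ArrayList<>();  // none in this sketch

        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ModifyGraphResponse resp = gpudb.modifyGraph(
        //         "road_graph", nodes, edges, weights, restrictions, buildOptions());
        System.out.println("options=" + buildOptions());
    }
}
```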
-
queryGraph
public QueryGraphResponse queryGraph(QueryGraphRequest request) throws GPUdbException
Employs a topological query on a graph generated a priori by createGraph and returns a list of adjacent edge(s) or node(s), also known as an adjacency list, depending on what's been provided to the endpoint; providing edges will return nodes and providing nodes will return edges.
To determine the node(s) or edge(s) adjacent to a value from a given column, provide a list of values to queries. This field can be populated with column values from any table as long as the type is supported by the given identifier. See Query Identifiers for more information.
To return the adjacency list in the response, leave adjacencyTable empty.
IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
queryGraph
public QueryGraphResponse queryGraph(String graphName, List<String> queries, List<String> restrictions, String adjacencyTable, int rings, Map<String,String> options) throws GPUdbException
Employs a topological query on a graph generated a priori by createGraph and returns a list of adjacent edge(s) or node(s), also known as an adjacency list, depending on what's been provided to the endpoint; providing edges will return nodes and providing nodes will return edges.
To determine the node(s) or edge(s) adjacent to a value from a given column, provide a list of values to queries. This field can be populated with column values from any table as long as the type is supported by the given identifier. See Query Identifiers for more information.
To return the adjacency list in the response, leave adjacencyTable empty.
IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.
- Parameters:
graphName - Name of the graph resource to query.
queries - Nodes or edges to be queried, specified using query identifiers. Identifiers can be used with existing column names, e.g., 'table.column AS QUERY_NODE_ID', raw values, e.g., '{0, 2} AS QUERY_NODE_ID', or expressions, e.g., 'ST_MAKEPOINT(table.x, table.y) AS QUERY_NODE_WKTPOINT'. Multiple values can be provided as long as the same identifier is used for all values. If using raw values in an identifier combination, the number of values specified must match across the combination.
restrictions - Additional restrictions to apply to the nodes/edges of an existing graph. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS RESTRICTIONS_EDGE_ID', expressions, e.g., 'column/2 AS RESTRICTIONS_VALUECOMPARED', or raw values, e.g., '{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'. If using raw values in an identifier combination, the number of values specified must match across the combination. The default value is an empty List.
adjacencyTable - Name of the table to store the resulting adjacencies, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. If left blank, the query results are instead returned in the response. If the 'QUERY_TARGET_NODE_LABEL' query identifier is used in queries, then two additional columns will be available: 'PATH_ID' and 'RING_ID'. See Using Labels for more information. The default value is ''.
rings - Sets the number of rings around the node to query for adjacency, with '1' being the edges directly attached to the queried node. Also known as the number of hops. For example, if set to '2', the edge(s) directly attached to the queried node(s) will be returned, along with the edge(s) attached to the node(s) reached via the initial ring of edge(s) surrounding the queried node(s). If the value is set to '0', any nodes that meet the criteria in queries and restrictions will be returned. This parameter is only applicable when querying nodes. The default value is 1.
options - Additional parameters.
- FORCE_UNDIRECTED: If set to TRUE, all inbound and outbound edges relative to the node will be returned. If set to FALSE, only outbound edges relative to the node will be returned. This parameter is only applicable if the queried graph graphName is directed and when querying nodes. Consult Directed Graphs for more details. Supported values: TRUE, FALSE. The default value is FALSE.
- LIMIT: When specified (>0), limits the number of query results. The size of the nodes table will be limited by the LIMIT value. The default value is '0'.
- OUTPUT_WKT_PATH: If true, concatenated WKT line segments will be added as the WKT column of the adjacency table. Supported values: TRUE, FALSE. The default value is FALSE.
- AND_LABELS: If set to TRUE, the result of the query has entities that satisfy all of the target labels, instead of any. Supported values: TRUE, FALSE. The default value is FALSE.
- SERVER_ID: Indicates which graph server(s) to send the request to. The default is to send to the server, among those containing the corresponding graph, that has the most computational bandwidth.
- OUTPUT_CHARN_LENGTH: When specified (>0 and <=256), limits the char length on the output tables for string-based nodes. The default length is 64. The default value is '64'.
- FIND_COMMON_LABELS: If set to true, for many-to-many queries or multi-level traversals, lists the common labels between the source and target nodes and the edge labels in each path. Otherwise (zero rings), lists all labels of the queried node(s). Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
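A minimal sketch of a two-ring adjacency query using the string-parameter overload above. The graph name, node value, and server URL are hypothetical; the client call is commented out since it needs a running server, so the runnable part only assembles the queries list and options map (option keys assumed to be the lowercase forms of the constants documented above).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class QueryGraphExample {
    // Options for the query; keys assumed lowercase per the documented constants.
    public static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("limit", "100");             // cap the number of query results
        options.put("output_wkt_path", "true");  // add a WKT column to the adjacency table
        return options;
    }

    public static void main(String[] args) {
        // Query the edges within two hops (rings) of node 42, using a raw-value identifier
        List<String> queries = Arrays.asList("{42} AS QUERY_NODE_ID");
        List<String> restrictions = new ArrayList<>();  // no additional restrictions
        int rings = 2;

        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // QueryGraphResponse resp = gpudb.queryGraph(
        //         "road_graph", queries, restrictions,
        //         "" /* empty: return adjacency list in the response */,
        //         rings, buildOptions());
        System.out.println("rings=" + rings + " options=" + buildOptions());
    }
}
```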
-
repartitionGraph
public RepartitionGraphResponse repartitionGraph(RepartitionGraphRequest request) throws GPUdbException
Rebalances an existing partitioned graph.
IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
repartitionGraph
public RepartitionGraphResponse repartitionGraph(String graphName, Map<String,String> options) throws GPUdbException
Rebalances an existing partitioned graph.
IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.
- Parameters:
graphName - Name of the graph resource to rebalance.
options - Optional parameters.
- NEW_GRAPH_NAME: If a non-empty value is specified, the original graph will be kept (non-default behavior) and a new balanced graph will be created under this given name. When the value is empty (default), the generated 'balanced' graph will replace the original 'unbalanced' graph under the same graph name. The default value is ''.
- SOURCE_NODE: The distributed shortest path solve is run from this source node to all nodes in the graph to create balanced partitions using the iso-distance levels of the solution. By default (when the value is an empty string), the source node is selected automatically by the rebalance algorithm; otherwise, the user-specified node is used as the source. The default value is ''.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
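A minimal sketch of keeping the original graph while writing the balanced copy under a new name, per the NEW_GRAPH_NAME option above. Graph names and server URL are hypothetical; the client call is commented out, and the option key is assumed to be the lowercase form of the documented constant.

```java
import java.util.HashMap;
import java.util.Map;

public class RepartitionGraphExample {
    public static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        // Non-empty value: keep the original graph and create the balanced
        // graph under this new (hypothetical) name instead of replacing it.
        options.put("new_graph_name", "road_graph_balanced");
        return options;
    }

    public static void main(String[] args) {
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // RepartitionGraphResponse resp =
        //         gpudb.repartitionGraph("road_graph", buildOptions());
        System.out.println(buildOptions());
    }
}
```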
-
reserveResource
public ReserveResourceResponse reserveResource(ReserveResourceRequest request) throws GPUdbException
- Throws:
GPUdbException
-
reserveResource
public ReserveResourceResponse reserveResource(String component, String name, String action, long bytesRequested, long ownerId, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
restoreBackup
public RestoreBackupResponse restoreBackup(RestoreBackupRequest request) throws GPUdbException
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
restoreBackup
public RestoreBackupResponse restoreBackup(String backupName, Map<String,String> restoreObjectsMap, String datasourceName, Map<String,String> options) throws GPUdbException
- Parameters:
backupName - Name of the backup to restore from, which must refer to an existing backup. The default value is ''.
restoreObjectsMap - Map of database objects to be restored from the backup.
- ALL: All object types and data contained in the given schema(s).
- TABLE: Table(s) and SQL view(s).
- CREDENTIAL: Credential(s).
- CONTEXT: Context(s).
- DATASINK: Data sink(s).
- DATASOURCE: Data source(s).
- STORED_PROCEDURE: SQL procedure(s).
- MONITOR: Table monitor(s) / SQL stream(s).
- USER: User(s) (internal and external) and associated permissions.
- ROLE: Role(s), role members (roles or users, recursively), and associated permissions.
- CONFIGURATION: If TRUE, restore the database configuration file. Supported values: TRUE, FALSE. The default value is FALSE.
datasourceName - Data source through which the backup will be restored.
options - Optional parameters.
- BACKUP_ID: ID of the snapshot to restore. Leave empty to restore the most recent snapshot in the backup. The default value is ''.
- RESTORE_POLICY: Behavior to apply when any database object to restore already exists. Supported values:
  - NONE: If an object to be restored already exists with the same name, abort and return an error.
  - REPLACE: If an object to be restored already exists with the same name, replace it with the backup version.
  - RENAME: If an object to be restored already exists with the same name, move that existing object to the schema specified by RENAMED_OBJECTS_SCHEMA.
  The default value is NONE.
- RENAMED_OBJECTS_SCHEMA: If the RESTORE_POLICY is RENAME, use this schema for relocated existing objects instead of the default generated one. The default value is ''.
- CREATE_SCHEMA_IF_NOT_EXIST: Behavior to apply when the schema containing any database object to restore does not already exist. Supported values:
  - TRUE: If the schema containing any restored object does not exist, create it automatically.
  - FALSE: If the schema containing any restored object does not exist, return an error.
  The default value is TRUE.
- REINGEST: Behavior to apply when restoring table data. Supported values:
  - TRUE: Restore table data by re-ingesting it. This is the default behavior if the cluster topology differs from that of the contained backup.
  - FALSE: Restore the persisted data files directly.
  The default value is FALSE.
- DDL_ONLY: Behavior to apply when restoring tables. Supported values: TRUE, FALSE. The default value is FALSE.
- CHECKSUM: Whether or not to verify checksums for backup files when restoring. Supported values: TRUE, FALSE. The default value is FALSE.
- DRY_RUN: Whether or not to perform a dry run of the restoration operation. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
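A minimal sketch of a dry-run restore that would replace existing objects, using the string-parameter overload above. The backup name, table name, and data source are hypothetical; the client call is commented out, and keys/values are assumed to be the lowercase string forms of the documented constants.

```java
import java.util.HashMap;
import java.util.Map;

public class RestoreBackupExample {
    // Map of object types to restore; here a single (hypothetical) table.
    public static Map<String, String> buildRestoreObjects() {
        Map<String, String> objects = new HashMap<>();
        objects.put("table", "sales.orders");
        return objects;
    }

    public static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("restore_policy", "replace");  // overwrite objects that already exist
        options.put("dry_run", "true");            // validate without actually restoring
        return options;
    }

    public static void main(String[] args) {
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // RestoreBackupResponse resp = gpudb.restoreBackup(
        //         "nightly_backup", buildRestoreObjects(),
        //         "backup_datasource", buildOptions());
        System.out.println(buildOptions());
    }
}
```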
-
revokePermission
public RevokePermissionResponse revokePermission(RevokePermissionRequest request) throws GPUdbException
Revokes the specified permission on the specified object from a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermission
public RevokePermissionResponse revokePermission(String principal, String object, String objectType, String permission, Map<String,String> options) throws GPUdbException
Revokes the specified permission on the specified object from a user or role.
- Parameters:
principal - Name of the user or role for which the permission is being revoked. Must be an existing user or role. The default value is ''.
object - Name of the object from which the permission is being revoked. It is recommended to use a fully-qualified name when possible.
objectType - The type of the object from which the permission is being revoked. Supported values:
- CATALOG: Catalog
- CONTEXT: Context
- CREDENTIAL: Credential
- DATASINK: Data Sink
- DATASOURCE: Data Source
- DIRECTORY: KIFS File Directory
- GRAPH: A Graph object
- PROC: UDF Procedure
- SCHEMA: Schema
- SQL_PROC: SQL Procedure
- SYSTEM: System-level access
- TABLE: Database Table
- TABLE_MONITOR: Table monitor
permission - Permission being revoked. Supported values:
- ADMIN: Full read/write and administrative access on the object.
- CONNECT: Connect access on the given data source or data sink.
- CREATE: Ability to create new objects of this type.
- DELETE: Delete rows from tables.
- EXECUTE: Ability to execute the procedure object.
- INSERT: Insert access to tables.
- MONITOR: Monitor logs and statistics.
- READ: Ability to read, list, and use the object.
- SEND_ALERT: Ability to send system alerts.
- UPDATE: Update access to the table.
- USER_ADMIN: Access to administer users and roles that do not have system_admin permission.
- WRITE: Access to write, change, and delete objects.
options - Optional parameters.
- COLUMNS: Revoke table security from these columns, comma-separated. The default value is ''.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
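A minimal sketch of revoking INSERT on a table from a role via the generic endpoint above, restricted to specific columns with the COLUMNS option. The role, table, and column names are hypothetical; the client call is commented out, and the lowercase string forms of the documented constants are assumed for objectType, permission, and the option key.

```java
import java.util.HashMap;
import java.util.Map;

public class RevokePermissionExample {
    public static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        // Revoke only from these (hypothetical) columns rather than the whole table
        options.put("columns", "ssn,salary");
        return options;
    }

    public static void main(String[] args) {
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // RevokePermissionResponse resp = gpudb.revokePermission(
        //         "analyst_role",        // principal
        //         "hr.employees",        // object (fully qualified)
        //         "table",               // objectType
        //         "insert",              // permission
        //         buildOptions());
        System.out.println(buildOptions());
    }
}
```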
-
revokePermissionCredential
public RevokePermissionCredentialResponse revokePermissionCredential(RevokePermissionCredentialRequest request) throws GPUdbException
Revokes a credential-level permission from a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermissionCredential
public RevokePermissionCredentialResponse revokePermissionCredential(String name, String permission, String credentialName, Map<String,String> options) throws GPUdbException
Revokes a credential-level permission from a user or role.
- Parameters:
name - Name of the user or role from which the permission will be revoked. Must be an existing user or role.
permission - Permission to revoke from the user or role. Supported values:
- CREDENTIAL_ADMIN: Full read/write and administrative access on the credential.
- CREDENTIAL_READ: Ability to read and use the credential.
credentialName - Name of the credential on which the permission will be revoked. Must be an existing credential, or an empty string to revoke access on all credentials.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermissionDatasource
public RevokePermissionDatasourceResponse revokePermissionDatasource(RevokePermissionDatasourceRequest request) throws GPUdbException
Revokes a data source permission from a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermissionDatasource
public RevokePermissionDatasourceResponse revokePermissionDatasource(String name, String permission, String datasourceName, Map<String,String> options) throws GPUdbException
Revokes a data source permission from a user or role.
- Parameters:
name - Name of the user or role from which the permission will be revoked. Must be an existing user or role.
permission - Permission to revoke from the user or role.
datasourceName - Name of the data source on which the permission will be revoked. Must be an existing data source, or an empty string to revoke permission from all data sources.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermissionDirectory
public RevokePermissionDirectoryResponse revokePermissionDirectory(RevokePermissionDirectoryRequest request) throws GPUdbException
Revokes a KiFS directory-level permission from a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermissionDirectory
public RevokePermissionDirectoryResponse revokePermissionDirectory(String name, String permission, String directoryName, Map<String,String> options) throws GPUdbException
Revokes a KiFS directory-level permission from a user or role.
- Parameters:
name - Name of the user or role from which the permission will be revoked. Must be an existing user or role.
permission - Permission to revoke from the user or role. Supported values:
- DIRECTORY_READ: For files in the directory, access to list files, download files, or use files in server-side functions.
- DIRECTORY_WRITE: Access to upload files to, or delete files from, the directory. A user or role with write access automatically has read access.
directoryName - Name of the KiFS directory to which the permission revokes access.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermissionProc
public RevokePermissionProcResponse revokePermissionProc(RevokePermissionProcRequest request) throws GPUdbException
Revokes a proc-level permission from a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermissionProc
public RevokePermissionProcResponse revokePermissionProc(String name, String permission, String procName, Map<String,String> options) throws GPUdbException
Revokes a proc-level permission from a user or role.
- Parameters:
name - Name of the user or role from which the permission will be revoked. Must be an existing user or role.
permission - Permission to revoke from the user or role. Supported values:
- PROC_ADMIN: Admin access to the proc.
- PROC_EXECUTE: Execute access to the proc.
procName - Name of the proc to which the permission grants access. Must be an existing proc, or an empty string if the permission grants access to all procs.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermissionSystem
public RevokePermissionSystemResponse revokePermissionSystem(RevokePermissionSystemRequest request) throws GPUdbException
Revokes a system-level permission from a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermissionSystem
public RevokePermissionSystemResponse revokePermissionSystem(String name, String permission, Map<String,String> options) throws GPUdbException
Revokes a system-level permission from a user or role.
- Parameters:
name - Name of the user or role from which the permission will be revoked. Must be an existing user or role.
permission - Permission to revoke from the user or role. Supported values:
- SYSTEM_ADMIN: Full access to all data and system functions.
- SYSTEM_USER_ADMIN: Access to administer users and roles that do not have system_admin permission.
- SYSTEM_WRITE: Read and write access to all tables.
- SYSTEM_READ: Read-only access to all tables.
- SYSTEM_SEND_ALERT: Send system alerts.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermissionTable
public RevokePermissionTableResponse revokePermissionTable(RevokePermissionTableRequest request) throws GPUdbException
Revokes a table-level permission from a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokePermissionTable
public RevokePermissionTableResponse revokePermissionTable(String name, String permission, String tableName, Map<String,String> options) throws GPUdbException
Revokes a table-level permission from a user or role.
- Parameters:
name - Name of the user or role from which the permission will be revoked. Must be an existing user or role.
permission - Permission to revoke from the user or role. Supported values:
- TABLE_ADMIN: Full read/write and administrative access to the table.
- TABLE_INSERT: Insert access to the table.
- TABLE_UPDATE: Update access to the table.
- TABLE_DELETE: Delete access to the table.
- TABLE_READ: Read access to the table.
tableName - Name of the table to which the permission grants access, in [schema_name.]table_name format, using standard name resolution rules. Must be an existing table, view, or schema.
options - Optional parameters.
- COLUMNS: Apply security to these columns, comma-separated. The default value is ''.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokeRole
public RevokeRoleResponse revokeRole(RevokeRoleRequest request) throws GPUdbException
Revokes membership in a role from a user or role.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
revokeRole
public RevokeRoleResponse revokeRole(String role, String member, Map<String,String> options) throws GPUdbException
Revokes membership in a role from a user or role.- Parameters:
role- Name of the role in which membership will be revoked. Must be an existing role.member- Name of the user or role that will be revoked membership inrole. Must be an existing user or role.options- Optional parameters. The default value is an emptyMap.- Returns:
Responseobject containing the results of the operation.- Throws:
GPUdbException- if an error occurs during the operation.
-
showBackup
public ShowBackupResponse showBackup(ShowBackupRequest request) throws GPUdbException
Shows information about one or more backups accessible via the data source specified by datasourceName.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
showBackup
public ShowBackupResponse showBackup(String backupName, String datasourceName, Map<String,String> options) throws GPUdbException
Shows information about one or more backups accessible via the data source specified by datasourceName. - Parameters:
backupName - Name of the backup. An empty string or '*' will show all existing backups. Any text followed by a '*' will show backups whose name starts with that text. The default value is ''.
datasourceName - Data source through which the backup is accessible.
options - Optional parameters.
  BACKUP_ID: ID of the snapshot to show. Leave empty to show information from the most recent snapshot in the backup. The default value is ''.
  SHOW_CONTENTS: Show the contents of the backed-up snapshots. Supported values:
    NONE: Don't show snapshot contents.
    OBJECT_NAMES: Show backed-up object names, and for tables, sizing detail.
    OBJECT_FILES: Show backed-up object names, and for tables, sizing detail and associated files.
  The default value is NONE.
  NO_ERROR_IF_NOT_EXISTS: Whether or not to suppress the error if the specified backup does not exist. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
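A minimal sketch of the String overload. The lowercase option key strings, the backup and data source names, and the connection URL are all assumptions for illustration (the documented constants such as SHOW_CONTENTS are presumed to resolve to these lowercase strings; verify against your client version). The actual call is left commented out since it requires a running Kinetica server.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: show the contents of the most recent snapshot of a backup,
// tolerating the case where the backup does not exist.
public class ShowBackupSketch {
    static Map<String,String> options() {
        Map<String,String> opts = new HashMap<>();
        opts.put("show_contents", "object_names");   // names + table sizing detail
        opts.put("no_error_if_not_exists", "true");  // suppress missing-backup error
        return opts;
    }
    public static void main(String[] args) throws Exception {
        // Hypothetical server URL, backup pattern, and data source name:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowBackupResponse resp =
        //         gpudb.showBackup("nightly*", "backup_datasource", options());
        System.out.println(options());
    }
}
```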
-
showContainerRegistry
public ShowContainerRegistryResponse showContainerRegistry(ShowContainerRegistryRequest request) throws GPUdbException
- Throws:
GPUdbException
-
showContainerRegistry
public ShowContainerRegistryResponse showContainerRegistry(String registryName, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
showCredential
public ShowCredentialResponse showCredential(ShowCredentialRequest request) throws GPUdbException
Shows information about a specified credential or all credentials. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showCredential
public ShowCredentialResponse showCredential(String credentialName, Map<String,String> options) throws GPUdbException
Shows information about a specified credential or all credentials. - Parameters:
credentialName - Name of the credential on which to retrieve information. The name must refer to a currently existing credential. If '*' is specified, information about all credentials will be returned.
options - Optional parameters. The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showDatasink
public ShowDatasinkResponse showDatasink(ShowDatasinkRequest request) throws GPUdbException
Shows information about a specified data sink or all data sinks. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showDatasink
public ShowDatasinkResponse showDatasink(String name, Map<String,String> options) throws GPUdbException
Shows information about a specified data sink or all data sinks. - Parameters:
name - Name of the data sink for which to retrieve information. The name must refer to a currently existing data sink. If '*' is specified, information about all data sinks will be returned.
options - Optional parameters. The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showDatasource
public ShowDatasourceResponse showDatasource(ShowDatasourceRequest request) throws GPUdbException
Shows information about a specified data source or all data sources. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showDatasource
public ShowDatasourceResponse showDatasource(String name, Map<String,String> options) throws GPUdbException
Shows information about a specified data source or all data sources. - Parameters:
name - Name of the data source for which to retrieve information. The name must refer to a currently existing data source. If '*' is specified, information about all data sources will be returned.
options - Optional parameters. The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showDirectories
public ShowDirectoriesResponse showDirectories(ShowDirectoriesRequest request) throws GPUdbException
Shows information about directories in KiFS. Can be used to show a single directory, or all directories. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showDirectories
public ShowDirectoriesResponse showDirectories(String directoryName, Map<String,String> options) throws GPUdbException
Shows information about directories in KiFS. Can be used to show a single directory, or all directories. - Parameters:
directoryName - The KiFS directory name to show. If empty, shows all directories. The default value is ''.
options - Optional parameters. The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showEnvironment
public ShowEnvironmentResponse showEnvironment(ShowEnvironmentRequest request) throws GPUdbException
Shows information about a specified user-defined function (UDF) environment or all environments. Returns detailed information about existing environments. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showEnvironment
public ShowEnvironmentResponse showEnvironment(String environmentName, Map<String,String> options) throws GPUdbException
Shows information about a specified user-defined function (UDF) environment or all environments. Returns detailed information about existing environments. - Parameters:
environmentName - Name of the environment on which to retrieve information. The name must refer to a currently existing environment. If '*' or an empty value is specified, information about all environments will be returned. The default value is ''.
options - Optional parameters.
  NO_ERROR_IF_NOT_EXISTS: If TRUE and the environment specified in environmentName does not exist, no error is returned. If FALSE and the environment specified in environmentName does not exist, then an error is returned. Supported values: TRUE, FALSE. The default value is FALSE.
  SHOW_NAMES_ONLY: If TRUE, only return the names of the installed environments and omit the package listing. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showFiles
public ShowFilesResponse showFiles(ShowFilesRequest request) throws GPUdbException
Shows information about files in KiFS. Can be used for individual files, or to show all files in a given directory. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showFiles
public ShowFilesResponse showFiles(List<String> paths, Map<String,String> options) throws GPUdbException
Shows information about files in KiFS. Can be used for individual files, or to show all files in a given directory. - Parameters:
paths - File paths to show. Each path can be a KiFS directory name, or a full path to a KiFS file. File paths may contain wildcard characters after the KiFS directory delimiter. Accepted wildcard characters are asterisk (*) to represent any string of zero or more characters, and question mark (?) to indicate a single character.
options - Optional parameters. The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
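A sketch of building the paths argument, mixing a whole KiFS directory with a wildcard pattern. The directory and file names are hypothetical, and the commented-out call requires a running Kinetica server.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: list every file in one KiFS directory plus files matching
// a single-character wildcard pattern in another.
public class ShowFilesSketch {
    static List<String> paths() {
        return Arrays.asList(
                "data",                   // all files in the 'data' directory
                "exports/report_?.csv");  // '?' matches exactly one character
    }
    public static void main(String[] args) throws Exception {
        // Hypothetical connection:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowFilesResponse resp =
        //         gpudb.showFiles(paths(), java.util.Collections.emptyMap());
        System.out.println(paths());
    }
}
```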
-
showFunctions
public ShowFunctionsResponse showFunctions(ShowFunctionsRequest request) throws GPUdbException
- Throws:
GPUdbException
-
showFunctions
public ShowFunctionsResponse showFunctions(List<String> names, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
showGraph
public ShowGraphResponse showGraph(ShowGraphRequest request) throws GPUdbException
Shows information and characteristics of graphs that exist on the graph server. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showGraph
public ShowGraphResponse showGraph(String graphName, Map<String,String> options) throws GPUdbException
Shows information and characteristics of graphs that exist on the graph server. - Parameters:
graphName - Name of the graph on which to retrieve information. If left as the default value, information about all graphs is returned. The default value is ''.
options - Optional parameters.
  SHOW_ORIGINAL_REQUEST: If set to TRUE, the request that was originally used to create the graph is also returned as JSON. Supported values: TRUE, FALSE. The default value is TRUE.
  SERVER_ID: Indicates which graph server(s) to send the request to. The default is to send to all servers, to get information about all graphs.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
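A sketch of querying all graphs while omitting the original creation request from the response. The lowercase option key is an assumption about the string behind the SHOW_ORIGINAL_REQUEST constant; verify against your client version.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: fetch information about every graph, without the creation JSON.
public class ShowGraphSketch {
    static Map<String,String> options() {
        Map<String,String> opts = new HashMap<>();
        opts.put("show_original_request", "false");  // skip creation-request JSON
        return opts;
    }
    public static void main(String[] args) throws Exception {
        // Hypothetical connection; '' requests all graphs:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowGraphResponse resp = gpudb.showGraph("", options());
        System.out.println(options());
    }
}
```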
-
showGraphGrammar
public ShowGraphGrammarResponse showGraphGrammar(ShowGraphGrammarRequest request) throws GPUdbException
- Throws:
GPUdbException
-
showGraphGrammar
public ShowGraphGrammarResponse showGraphGrammar(Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
showModel
public ShowModelResponse showModel(ShowModelRequest request) throws GPUdbException
- Throws:
GPUdbException
-
showModel
public ShowModelResponse showModel(List<String> modelNames, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
showProc
public ShowProcResponse showProc(ShowProcRequest request) throws GPUdbException
Shows information about a proc. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showProc
public ShowProcResponse showProc(String procName, Map<String,String> options) throws GPUdbException
Shows information about a proc. - Parameters:
procName - Name of the proc to show information about. If specified, must be the name of a currently existing proc. If not specified, information about all procs will be returned. The default value is ''.
options - Optional parameters.
  INCLUDE_FILES: If set to TRUE, the files that make up the proc will be returned. If set to FALSE, the files will not be returned. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showProcStatus
public ShowProcStatusResponse showProcStatus(ShowProcStatusRequest request) throws GPUdbException
Shows the statuses of running or completed proc instances. Results are grouped by run ID (as returned from executeProc) and data segment ID (each invocation of the proc command on a data segment is assigned a data segment ID). - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showProcStatus
public ShowProcStatusResponse showProcStatus(String runId, Map<String,String> options) throws GPUdbException
Shows the statuses of running or completed proc instances. Results are grouped by run ID (as returned from executeProc) and data segment ID (each invocation of the proc command on a data segment is assigned a data segment ID). - Parameters:
runId - The run ID of a specific proc instance for which the status will be returned. If a proc with a matching run ID is not found, the response will be empty. If not specified, the statuses of all executed proc instances will be returned. The default value is ''.
options - Optional parameters.
  CLEAR_COMPLETE: If set to TRUE and a proc instance has completed (either successfully or unsuccessfully), then its status will be cleared and no longer returned in subsequent calls. Supported values: TRUE, FALSE. The default value is FALSE.
  RUN_TAG: If runId is specified, return the status for a proc instance that has a matching run ID and a matching run tag that was provided to executeProc. If runId is not specified, return statuses for all proc instances where a matching run tag was provided to executeProc. The default value is ''.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
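A sketch of polling one proc run and clearing its status once finished. The run ID would come from a prior executeProc call; the lowercase option key and the connection URL are assumptions for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: check a single proc run's status, clearing it on completion
// so later showProcStatus calls no longer return it.
public class ShowProcStatusSketch {
    static Map<String,String> options() {
        Map<String,String> opts = new HashMap<>();
        opts.put("clear_complete", "true");  // drop finished runs from later calls
        return opts;
    }
    public static void main(String[] args) throws Exception {
        // Hypothetical connection; runId returned by executeProc:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowProcStatusResponse resp = gpudb.showProcStatus(runId, options());
        System.out.println(options());
    }
}
```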
-
showResourceObjects
public ShowResourceObjectsResponse showResourceObjects(ShowResourceObjectsRequest request) throws GPUdbException
Returns information about the internal sub-components (tiered objects) which use resources of the system. The request can either return results from actively used objects (the default) or it can be used to query the status of the objects of a given list of tables. Returns detailed information about the requested resource objects. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showResourceObjects
public ShowResourceObjectsResponse showResourceObjects(Map<String,String> options) throws GPUdbException
Returns information about the internal sub-components (tiered objects) which use resources of the system. The request can either return results from actively used objects (the default) or it can be used to query the status of the objects of a given list of tables. Returns detailed information about the requested resource objects. - Parameters:
options - Optional parameters.
  TIERS: Comma-separated list of tiers to query; leave blank for all tiers.
  EXPRESSION: An expression to filter the returned objects. The expression is limited to the following operators: =, !=, <, <=, >, >=, +, -, *, AND, OR, LIKE. For details see Expressions. To use a more complex expression, query the ki_catalog.ki_tiered_objects table directly.
  ORDER_BY: Single column to be sorted by, as well as the sort direction, e.g., 'size asc'.
  LIMIT: An integer indicating the maximum number of results to be returned, per rank, or (-1) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server's own limit, defined by the max_get_records_size parameter in the server configuration. The default value is '100'.
  TABLE_NAMES: Comma-separated list of tables to restrict the results to. Use '*' to show all tables.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
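A sketch of querying tiered-object usage for specific tables, largest objects first. The table names, tier name, and lowercase option keys are all assumptions for illustration; the call itself needs a running Kinetica server.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: RAM-tier usage for two tables, top 10 objects per rank by size.
public class ShowResourceObjectsSketch {
    static Map<String,String> options() {
        Map<String,String> opts = new HashMap<>();
        opts.put("tiers", "RAM");                                   // only the RAM tier
        opts.put("table_names", "ki_home.orders,ki_home.customers"); // hypothetical tables
        opts.put("order_by", "size desc");                          // largest first
        opts.put("limit", "10");                                    // per-rank cap
        return opts;
    }
    public static void main(String[] args) throws Exception {
        // Hypothetical connection:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowResourceObjectsResponse resp = gpudb.showResourceObjects(options());
        System.out.println(options());
    }
}
```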
-
showResourceStatistics
public ShowResourceStatisticsResponse showResourceStatistics(ShowResourceStatisticsRequest request) throws GPUdbException
Requests various statistics for storage/memory tiers and resource groups. Returns statistics on a per-rank basis. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showResourceStatistics
public ShowResourceStatisticsResponse showResourceStatistics(Map<String,String> options) throws GPUdbException
Requests various statistics for storage/memory tiers and resource groups. Returns statistics on a per-rank basis. - Parameters:
options - Optional parameters. The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showResourceGroups
public ShowResourceGroupsResponse showResourceGroups(ShowResourceGroupsRequest request) throws GPUdbException
Requests resource group properties. Returns detailed information about the requested resource groups. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showResourceGroups
public ShowResourceGroupsResponse showResourceGroups(List<String> names, Map<String,String> options) throws GPUdbException
Requests resource group properties. Returns detailed information about the requested resource groups. - Parameters:
names - List of names of groups to be shown. A single entry with an empty string returns all groups.
options - Optional parameters.
  SHOW_DEFAULT_VALUES: If TRUE, include values of fields that are based on the default resource group. Supported values: TRUE, FALSE. The default value is TRUE.
  SHOW_DEFAULT_GROUP: If TRUE, include the default and system resource groups in the response. This value defaults to FALSE if an explicit list of group names is provided, and TRUE otherwise. Supported values: TRUE, FALSE. The default value is TRUE.
  SHOW_TIER_USAGE: If TRUE, include the resource group usage on the worker ranks in the response. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
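A sketch of listing every resource group with per-rank tier usage. A single empty-string entry in names requests all groups, per the parameter description above; the lowercase option key and connection URL are assumptions.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: all resource groups, including worker-rank tier usage.
public class ShowResourceGroupsSketch {
    static List<String> names() {
        return Arrays.asList("");  // "" = return all groups
    }
    static Map<String,String> options() {
        Map<String,String> opts = new HashMap<>();
        opts.put("show_tier_usage", "true");  // include per-rank usage
        return opts;
    }
    public static void main(String[] args) throws Exception {
        // Hypothetical connection:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowResourceGroupsResponse resp =
        //         gpudb.showResourceGroups(names(), options());
        System.out.println(names() + " " + options());
    }
}
```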
-
showSchema
public ShowSchemaResponse showSchema(ShowSchemaRequest request) throws GPUdbException
Retrieves information about a schema (or all schemas), as specified in schemaName. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showSchema
public ShowSchemaResponse showSchema(String schemaName, Map<String,String> options) throws GPUdbException
Retrieves information about a schema (or all schemas), as specified in schemaName. - Parameters:
schemaName - Name of the schema for which to retrieve the information. If blank, then info for all schemas is returned.
options - Optional parameters.
  NO_ERROR_IF_NOT_EXISTS: If FALSE, will return an error if the provided schemaName does not exist. If TRUE, will return an empty result if the provided schemaName does not exist. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showSecurity
public ShowSecurityResponse showSecurity(ShowSecurityRequest request) throws GPUdbException
Shows security information relating to users and/or roles. If the caller is not a system administrator, only information relating to the caller and their roles is returned. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showSecurity
public ShowSecurityResponse showSecurity(List<String> names, Map<String,String> options) throws GPUdbException
Shows security information relating to users and/or roles. If the caller is not a system administrator, only information relating to the caller and their roles is returned. - Parameters:
names - A list of names of users and/or roles about which security information is requested. If none are provided, information about all users and roles will be returned.
options - Optional parameters.
  SHOW_CURRENT_USER: If TRUE, returns only security information for the current user. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
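A sketch of fetching security information for the calling user only. The lowercase option key is an assumption about the string behind SHOW_CURRENT_USER; an empty names list is used since no explicit users or roles are requested.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: security info scoped to the current user.
public class ShowSecuritySketch {
    static List<String> names() {
        return Collections.emptyList();  // no explicit users/roles
    }
    static Map<String,String> options() {
        Map<String,String> opts = new HashMap<>();
        opts.put("show_current_user", "true");  // restrict to the caller
        return opts;
    }
    public static void main(String[] args) throws Exception {
        // Hypothetical connection:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowSecurityResponse resp = gpudb.showSecurity(names(), options());
        System.out.println(options());
    }
}
```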
-
showSqlProc
public ShowSqlProcResponse showSqlProc(ShowSqlProcRequest request) throws GPUdbException
Shows information about SQL procedures, including the full definition of each requested procedure. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showSqlProc
public ShowSqlProcResponse showSqlProc(String procedureName, Map<String,String> options) throws GPUdbException
Shows information about SQL procedures, including the full definition of each requested procedure. - Parameters:
procedureName - Name of the procedure for which to retrieve the information. If blank, then information about all procedures is returned. The default value is ''.
options - Optional parameters.
  NO_ERROR_IF_NOT_EXISTS: If TRUE, no error will be returned if the requested procedure does not exist. If FALSE, an error will be returned if the requested procedure does not exist. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showStatistics
public ShowStatisticsResponse showStatistics(ShowStatisticsRequest request) throws GPUdbException
Retrieves the collected column statistics for the specified table(s). - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showStatistics
public ShowStatisticsResponse showStatistics(List<String> tableNames, Map<String,String> options) throws GPUdbException
Retrieves the collected column statistics for the specified table(s). - Parameters:
tableNames - Names of tables whose metadata will be fetched, each in [schema_name.]table_name format, using standard name resolution rules. All provided tables must exist, or an error is returned.
options - Optional parameters.
  NO_ERROR_IF_NOT_EXISTS: If TRUE and any of the table names specified in tableNames do not exist, no error is returned. If FALSE and any of the table names specified in tableNames do not exist, then an error is returned. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showSystemProperties
public ShowSystemPropertiesResponse showSystemProperties(ShowSystemPropertiesRequest request) throws GPUdbException
Returns server configuration and version related information to the caller. The admin tool uses it to present server related information to the user. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showSystemProperties
public ShowSystemPropertiesResponse showSystemProperties(Map<String,String> options) throws GPUdbException
Returns server configuration and version related information to the caller. The admin tool uses it to present server related information to the user. - Parameters:
options - Optional parameters.
  PROPERTIES: A comma-separated list of names of requested properties. If not specified, all properties will be returned.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showSystemStatus
public ShowSystemStatusResponse showSystemStatus(ShowSystemStatusRequest request) throws GPUdbException
Provides server configuration and health related status to the caller. The admin tool uses it to present server related information to the user. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showSystemStatus
public ShowSystemStatusResponse showSystemStatus(Map<String,String> options) throws GPUdbException
Provides server configuration and health related status to the caller. The admin tool uses it to present server related information to the user. - Parameters:
options - Optional parameters, currently unused. The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showSystemTiming
public ShowSystemTimingResponse showSystemTiming(ShowSystemTimingRequest request) throws GPUdbException
Returns the last 100 database requests along with the request timing and internal job ID. The admin tool uses it to present request timing information to the user. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showSystemTiming
public ShowSystemTimingResponse showSystemTiming(Map<String,String> options) throws GPUdbException
Returns the last 100 database requests along with the request timing and internal job ID. The admin tool uses it to present request timing information to the user. - Parameters:
options - Optional parameters, currently unused. The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showTable
public ShowTableResponse showTable(ShowTableRequest request) throws GPUdbException
Retrieves detailed information about a table, view, or schema, specified in tableName. If the supplied tableName is a schema, the call can return information about either the schema itself or the tables and views it contains. If tableName is empty, information about all schemas will be returned.
If the option GET_SIZES is set to TRUE, then the number of records in each table is returned (in sizes and fullSizes), along with the total number of objects across all requested tables (in totalSize and totalFullSize).
For a schema, setting the SHOW_CHILDREN option to FALSE returns only information about the schema itself; setting SHOW_CHILDREN to TRUE returns a list of tables and views contained in the schema, along with their corresponding detail.
To retrieve a list of every table, view, and schema in the database, set tableName to '*' and SHOW_CHILDREN to TRUE. When doing this, the returned totalSize and totalFullSize will not include the sizes of non-base tables (e.g., filters, views, joins, etc.). - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showTable
public ShowTableResponse showTable(String tableName, Map<String,String> options) throws GPUdbException
Retrieves detailed information about a table, view, or schema, specified in tableName. If the supplied tableName is a schema, the call can return information about either the schema itself or the tables and views it contains. If tableName is empty, information about all schemas will be returned.
If the option GET_SIZES is set to TRUE, then the number of records in each table is returned (in sizes and fullSizes), along with the total number of objects across all requested tables (in totalSize and totalFullSize).
For a schema, setting the SHOW_CHILDREN option to FALSE returns only information about the schema itself; setting SHOW_CHILDREN to TRUE returns a list of tables and views contained in the schema, along with their corresponding detail.
To retrieve a list of every table, view, and schema in the database, set tableName to '*' and SHOW_CHILDREN to TRUE. When doing this, the returned totalSize and totalFullSize will not include the sizes of non-base tables (e.g., filters, views, joins, etc.). - Parameters:
tableName - Name of the table for which to retrieve the information, in [schema_name.]table_name format, using standard name resolution rules. If blank, then returns information about all tables and views.
options - Optional parameters.
  DEPENDENCIES: Include view dependencies in the output. Supported values: TRUE, FALSE. The default value is FALSE.
  FORCE_SYNCHRONOUS: If TRUE, then the table sizes will wait for a read lock before returning. Supported values: TRUE, FALSE. The default value is TRUE.
  GET_ACCESS_DATA: If TRUE, then data about the last read, write, alter, and create will be returned. Supported values: TRUE, FALSE. The default value is FALSE.
  GET_CACHED_SIZES: If TRUE, then the number of records in each table, along with a cumulative count, will be returned; blank, otherwise. This version will return the sizes cached at rank 0, which may be stale if there is a multihead insert occurring. Supported values: TRUE, FALSE. The default value is FALSE.
  GET_SIZES: If TRUE, then the number of records in each table, along with a cumulative count, will be returned; blank, otherwise. Supported values: TRUE, FALSE. The default value is FALSE.
  SKIP_ADDITIONAL_INFO: If TRUE, then the response will not populate the additional_info field. Supported values: TRUE, FALSE. The default value is FALSE.
  NO_ERROR_IF_NOT_EXISTS: If FALSE, will return an error if the provided tableName does not exist. If TRUE, will return an empty result. Supported values: TRUE, FALSE. The default value is FALSE.
  SKIP_TEMP_SCHEMAS: If TRUE, then the table list will not include tables from SYS_TEMP and other system temporary schemas. This is the default behavior for non-admin users. Supported values: TRUE, FALSE. The default value is FALSE.
  SHOW_CHILDREN: If tableName is a schema, then TRUE will return information about the tables and views in the schema, and FALSE will return information about the schema itself. If tableName is a table or view, SHOW_CHILDREN must be FALSE. If tableName is empty, then SHOW_CHILDREN must be TRUE. Supported values: TRUE, FALSE. The default value is TRUE.
  GET_COLUMN_INFO: If TRUE, then column info (memory usage, etc.) will be returned. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
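A sketch of the common "list everything with record counts" pattern described above: tableName set to '*' with SHOW_CHILDREN and GET_SIZES enabled. The lowercase option keys are assumed to match the strings behind the documented constants, and the connection URL is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: list every table, view, and schema with record counts.
public class ShowTableSketch {
    static Map<String,String> options() {
        Map<String,String> opts = new HashMap<>();
        opts.put("get_sizes", "true");      // populate sizes/totalSize
        opts.put("show_children", "true");  // required when tableName is '*'
        return opts;
    }
    public static void main(String[] args) throws Exception {
        // Hypothetical connection; requires a running Kinetica server:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowTableResponse resp = gpudb.showTable("*", options());
        // resp.getTableNames().forEach(System.out::println);
        System.out.println(options());
    }
}
```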
-
showTableMetadata
public ShowTableMetadataResponse showTableMetadata(ShowTableMetadataRequest request) throws GPUdbException
Retrieves the user-provided metadata for the specified tables. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showTableMetadata
public ShowTableMetadataResponse showTableMetadata(List<String> tableNames, Map<String,String> options) throws GPUdbException
Retrieves the user-provided metadata for the specified tables. - Parameters:
tableNames - Names of tables whose metadata will be fetched, in [schema_name.]table_name format, using standard name resolution rules. All provided tables must exist, or an error is returned.
options - Optional parameters. The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showTableMonitors
public ShowTableMonitorsResponse showTableMonitors(ShowTableMonitorsRequest request) throws GPUdbException
Show table monitors and their properties. Table monitors are created using createTableMonitor. Returns detailed information about existing table monitors. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showTableMonitors
public ShowTableMonitorsResponse showTableMonitors(List<String> monitorIds, Map<String,String> options) throws GPUdbException
Show table monitors and their properties. Table monitors are created using createTableMonitor. Returns detailed information about existing table monitors. - Parameters:
monitorIds - List of monitors to be shown. An empty list or a single entry with an empty string returns all table monitors.
options - Optional parameters. The default value is an empty Map. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showTablesByType
public ShowTablesByTypeResponse showTablesByType(ShowTablesByTypeRequest request) throws GPUdbException
Gets names of the tables whose type matches the given criteria. Each table has a particular type. This type comprises the schema and properties of the table and sometimes a type label. This function allows a look up of the existing tables based on full or partial type information. The operation is synchronous. - Parameters:
request - Request object containing the parameters for the operation. - Returns:
Response object containing the results of the operation. - Throws:
GPUdbException - if an error occurs during the operation.
-
showTablesByType
public ShowTablesByTypeResponse showTablesByType(String typeId, String label, Map<String,String> options) throws GPUdbException
Gets the names of the tables whose type matches the given criteria. Each table has a particular type, comprising the schema and properties of the table and, sometimes, a type label. This function allows a lookup of existing tables based on full or partial type information. The operation is synchronous.
- Parameters:
typeId - Type ID returned by a call to createType.
label - Optional user-supplied label which can be used instead of the type ID to retrieve all tables with the given label.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
showTriggers
public ShowTriggersResponse showTriggers(ShowTriggersRequest request) throws GPUdbException
Retrieves information regarding the specified triggers or all currently active triggers.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
showTriggers
public ShowTriggersResponse showTriggers(List<String> triggerIds, Map<String,String> options) throws GPUdbException
Retrieves information regarding the specified triggers or all currently active triggers.
- Parameters:
triggerIds - List of IDs of the triggers whose information is to be retrieved. An empty list means information will be retrieved for all active triggers.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
showTypes
public ShowTypesResponse showTypes(ShowTypesRequest request) throws GPUdbException
Retrieves information for the specified data type ID or type label. For all data types that match the input criteria, the database returns the type ID, the type schema, the label (if available), and the type's column properties.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
showTypes
public ShowTypesResponse showTypes(String typeId, String label, Map<String,String> options) throws GPUdbException
Retrieves information for the specified data type ID or type label. For all data types that match the input criteria, the database returns the type ID, the type schema, the label (if available), and the type's column properties.
- Parameters:
typeId - Type ID returned in response to a call to createType.
label - Optional string that was supplied by the user in a call to createType.
options - Optional parameters.
NO_JOIN_TYPES: When set to 'true', no join types will be included. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
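As a sketch of how the NO_JOIN_TYPES option above might be supplied, here is the options map built with plain java.util; the type ID, URL, and the commented-out client call are placeholders:

```java
import java.util.HashMap;
import java.util.Map;

public class ShowTypesOptionsExample {
    // Builds the options map for showTypes; when 'no_join_types' is 'true',
    // join types are excluded from the result.
    static Map<String, String> buildOptions(boolean noJoinTypes) {
        Map<String, String> options = new HashMap<>();
        options.put("no_join_types", noJoinTypes ? "true" : "false");
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> options = buildOptions(true);
        // Hypothetical call (type ID and URL are placeholders):
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowTypesResponse resp = gpudb.showTypes("<type_id>", "", options);
        System.out.println(options);
    }
}
```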
-
showVideo
public ShowVideoResponse showVideo(ShowVideoRequest request) throws GPUdbException
Retrieves information about rendered videos.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
showVideo
public ShowVideoResponse showVideo(List<String> paths, Map<String,String> options) throws GPUdbException
Retrieves information about rendered videos.
- Parameters:
paths - The fully-qualified KiFS paths for the videos to show. If empty, shows all videos.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
showWal
public ShowWalResponse showWal(ShowWalRequest request) throws GPUdbException
Requests table write-ahead log (WAL) properties. Returns information about the requested table WAL entries.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
showWal
public ShowWalResponse showWal(List<String> tableNames, Map<String,String> options) throws GPUdbException
Requests table write-ahead log (WAL) properties. Returns information about the requested table WAL entries.
- Parameters:
tableNames - List of tables to query. An asterisk returns all tables.
options - Optional parameters.
SHOW_SETTINGS: If TRUE, include a map of the WAL settings for the requested tables. Supported values: TRUE, FALSE. The default value is TRUE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
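The wildcard table list and SHOW_SETTINGS option described above can be sketched as follows; the client call is shown in comments so the snippet needs no running cluster:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ShowWalExample {
    // An asterisk entry asks for WAL properties of all tables.
    static List<String> allTables() {
        return Arrays.asList("*");
    }

    // 'show_settings' set to 'true' includes each table's WAL settings map.
    static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("show_settings", "true");
        return options;
    }

    public static void main(String[] args) {
        // Hypothetical call against a running cluster:
        // ShowWalResponse resp = gpudb.showWal(allTables(), buildOptions());
        System.out.println(allTables() + " " + buildOptions());
    }
}
```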
-
solveGraph
public SolveGraphResponse solveGraph(SolveGraphRequest request) throws GPUdbException
Solves an existing graph for a type of problem (e.g., shortest path, page rank, traveling salesman, etc.) using source nodes, destination nodes, and additional, optional weights and restrictions.
IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation, the Graph REST Tutorial, and/or some /solve/graph examples before using this endpoint.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
solveGraph
public SolveGraphResponse solveGraph(String graphName, List<String> weightsOnEdges, List<String> restrictions, String solverType, List<String> sourceNodes, List<String> destinationNodes, String solutionTable, Map<String,String> options) throws GPUdbException
Solves an existing graph for a type of problem (e.g., shortest path, page rank, traveling salesman, etc.) using source nodes, destination nodes, and additional, optional weights and restrictions.
IMPORTANT: It's highly recommended that you review the Graphs and Solvers concepts documentation, the Graph REST Tutorial, and/or some /solve/graph examples before using this endpoint.
- Parameters:
graphName - Name of the graph resource to solve.
weightsOnEdges - Additional weights to apply to the edges of an existing graph. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS WEIGHTS_EDGE_ID', expressions, e.g., 'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED', or constant values, e.g., '{4, 15, 2} AS WEIGHTS_VALUESPECIFIED'. Any provided weights will be added (in the case of 'WEIGHTS_VALUESPECIFIED') to or multiplied with (in the case of 'WEIGHTS_FACTORSPECIFIED') the existing weight(s). If using constant values in an identifier combination, the number of values specified must match across the combination. The default value is an empty List.
restrictions - Additional restrictions to apply to the nodes/edges of an existing graph. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS RESTRICTIONS_EDGE_ID', expressions, e.g., 'column/2 AS RESTRICTIONS_VALUECOMPARED', or constant values, e.g., '{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'. If using constant values in an identifier combination, the number of values specified must match across the combination. If the remove_previous_restrictions option is set to true, any provided restrictions will replace the existing restrictions. Otherwise, any provided restrictions will be added (in the case of 'RESTRICTIONS_VALUECOMPARED') to or replaced (in the case of 'RESTRICTIONS_ONOFFCOMPARED'). The default value is an empty List.
solverType - The type of solver to use for the graph. Supported values:
SHORTEST_PATH: Solves for the optimal (shortest) path based on weights and restrictions from one source to destination nodes. Also known as the Dijkstra solver.
PAGE_RANK: Solves for the probability of each destination node being visited based on the links of the graph topology. Weights are not required to use this solver.
PROBABILITY_RANK: Solves for the transitional probability (Hidden Markov) for each node based on the weights (probability assigned over given edges).
CENTRALITY: Solves for the degree of a node to depict how many pairs of individuals would have to go through the node to reach one another in the minimum number of hops. Also known as betweenness.
MULTIPLE_ROUTING: Solves for finding the minimum cost cumulative path for a round-trip starting from the given source and visiting each given destination node once, then returning to the source. Also known as the traveling salesman problem.
INVERSE_SHORTEST_PATH: Solves for finding the optimal path cost for each destination node to route to the source node. Also known as inverse Dijkstra or the service man routing problem.
BACKHAUL_ROUTING: Solves for optimal routes that connect remote asset nodes to the fixed (backbone) asset nodes.
ALLPATHS: Solves for paths that would give costs between max and min solution radii - make sure to limit by the 'max_solution_targets' option. Min cost should be >= shortest_path cost.
STATS_ALL: Solves for graph statistics such as graph diameter, longest pairs, vertex valences, topology numbers, average and max cluster sizes, etc.
CLOSENESS: Solves for the centrality closeness score per node as the sum of the inverse shortest path costs to all nodes in the graph.
The default value is SHORTEST_PATH.
sourceNodes - It can be one of the nodal identifiers - e.g., 'NODE_WKTPOINT' for source nodes. For BACKHAUL_ROUTING, this list depicts the fixed assets. The default value is an empty List.
destinationNodes - It can be one of the nodal identifiers - e.g., 'NODE_WKTPOINT' for destination (target) nodes. For BACKHAUL_ROUTING, this list depicts the remote assets. The default value is an empty List.
solutionTable - Name of the table to store the solution, in [schema_name.]table_name format, using standard name resolution rules. The default value is 'graph_solutions'.
options - Additional parameters.
MAX_SOLUTION_RADIUS: For ALLPATHS, SHORTEST_PATH and INVERSE_SHORTEST_PATH solvers only. Sets the maximum solution cost radius, which ignores the destinationNodes list and instead outputs the nodes within the radius sorted by ascending cost. If set to '0.0', the setting is ignored. The default value is '0.0'.
MIN_SOLUTION_RADIUS: For ALLPATHS, SHORTEST_PATH and INVERSE_SHORTEST_PATH solvers only. Applicable only when MAX_SOLUTION_RADIUS is set. Sets the minimum solution cost radius, which ignores the destinationNodes list and instead outputs the nodes within the radius sorted by ascending cost. If set to '0.0', the setting is ignored. The default value is '0.0'.
MAX_SOLUTION_TARGETS: For ALLPATHS, SHORTEST_PATH and INVERSE_SHORTEST_PATH solvers only. Sets the maximum number of solution targets, which ignores the destinationNodes list and instead outputs no more than n nodes sorted by ascending cost, where n is equal to the setting value. If set to 0, the setting is ignored. The default value is '1000'.
UNIFORM_WEIGHTS: When specified, assigns the given value to all the edges in the graph. Note that weights provided in weightsOnEdges will override this value.
LEFT_TURN_PENALTY: This will add an additional weight over the edges labeled as 'left turn' if the 'add_turn' option parameter of createGraph was invoked at graph creation. The default value is '0.0'.
RIGHT_TURN_PENALTY: This will add an additional weight over the edges labeled as 'right turn' if the 'add_turn' option parameter of createGraph was invoked at graph creation. The default value is '0.0'.
INTERSECTION_PENALTY: This will add an additional weight over the edges labeled as 'intersection' if the 'add_turn' option parameter of createGraph was invoked at graph creation. The default value is '0.0'.
SHARP_TURN_PENALTY: This will add an additional weight over the edges labeled as 'sharp turn' or 'u-turn' if the 'add_turn' option parameter of createGraph was invoked at graph creation. The default value is '0.0'.
NUM_BEST_PATHS: For MULTIPLE_ROUTING solvers only; sets the number of shortest paths computed from each node. This is the heuristic criterion. The default value of zero allows the number to be computed automatically by the solver. The user may want to override this parameter to speed up the solver. The default value is '0'.
MAX_NUM_COMBINATIONS: For MULTIPLE_ROUTING solvers only; sets the cap on the combinatorial sequences generated. If the default value of two million is overridden to a lesser value, it can potentially speed up the solver. The default value is '2000000'.
OUTPUT_EDGE_PATH: If true, then concatenated edge IDs will be added as the EDGE path column of the solution table for each source and target pair in shortest path solves. Supported values: TRUE, FALSE. The default value is FALSE.
OUTPUT_WKT_PATH: If true, then concatenated WKT line segments will be added as the Wktroute column of the solution table for each source and target pair in shortest path solves. Supported values: TRUE, FALSE. The default value is TRUE.
SERVER_ID: Indicates which graph server(s) to send the request to. The default is to send to the server, amongst those containing the corresponding graph, that has the most computational bandwidth. For the SHORTEST_PATH solver type, the input is split amongst the servers containing the corresponding graph.
CONVERGENCE_LIMIT: For PAGE_RANK solvers only; maximum percent relative threshold on the page rank scores of each node between consecutive iterations to satisfy convergence. The default value is '1.0' (one percent).
MAX_ITERATIONS: For PAGE_RANK solvers only; maximum number of page rank iterations for satisfying convergence. The default value is '100'.
MAX_RUNS: For all CENTRALITY solvers only; sets the maximum number of shortest path runs; the maximum possible value is the number of nodes in the graph. The default value of 0 enables this value to be auto-computed by the solver. The default value is '0'.
OUTPUT_CLUSTERS: For STATS_ALL solvers only; the cluster index for each node will be inserted as an additional column in the output. Supported values:
TRUE: An additional column 'CLUSTER' will be added for each node
FALSE: No extra cluster info per node will be available in the output
The default value is FALSE.
SOLVE_HEURISTIC: Specify the heuristic search criterion, only for geo graphs and shortest path solves towards a single target. Supported values:
ASTAR: Employs A-STAR heuristics to speed up the shortest path traversal
NONE: No heuristics are applied
The default value is NONE.
ASTAR_RADIUS: For path solvers only, when the 'solve_heuristic' option is 'astar'. The shortest path traversal front includes nodes only within this radius (in kilometers) as it moves towards the target location. The default value is '70'.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
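A minimal sketch of assembling solveGraph arguments for a SHORTEST_PATH solve. The graph name, node values, and the WKT-point identifier syntax are placeholders drawn from the parameter descriptions above, and the client call stays in comments so the snippet is self-contained:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SolveGraphArgsExample {
    // Options for a SHORTEST_PATH solve: cap the number of reported targets
    // and keep the concatenated WKT route column in the solution table.
    static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("max_solution_targets", "10");
        options.put("output_wkt_path", "true");
        return options;
    }

    public static void main(String[] args) {
        // Placeholder nodal identifiers; actual values depend on the graph.
        List<String> sourceNodes =
            Arrays.asList("{'POINT(10 10)'} AS NODE_WKTPOINT");
        List<String> destinationNodes =
            Arrays.asList("{'POINT(30 30)'} AS NODE_WKTPOINT");
        // Hypothetical call (graph and table names are placeholders):
        // SolveGraphResponse resp = gpudb.solveGraph(
        //     "my_graph", new java.util.ArrayList<>(), new java.util.ArrayList<>(),
        //     "SHORTEST_PATH", sourceNodes, destinationNodes,
        //     "graph_solutions", buildOptions());
        System.out.println(buildOptions());
    }
}
```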
-
updateRecordsRaw
public UpdateRecordsResponse updateRecordsRaw(RawUpdateRecordsRequest request) throws GPUdbException
Runs multiple predicate-based updates in a single call. With the list of given expressions, any matching record's column values will be updated as provided in newValuesMaps. There is also an optional 'upsert' capability: if a particular predicate doesn't match any existing record, then a new record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default, only 'pure primary key' predicates are allowed when updating primary key values. If the primary key for a table is the column 'attr1', then the operation will only accept predicates of the form "attr1 == 'foo'" if the attr1 column is being updated. For a composite primary key (e.g., columns 'attr1' and 'attr2'), the operation will only accept predicates of the form "(attr1 == 'foo') and (attr2 == 'bar')". That is, all primary key columns must appear in an equality predicate in the expressions. Furthermore, each 'pure primary key' predicate must be unique within a given request. These restrictions can be removed by utilizing some available options through options.
The UPDATE_ON_EXISTING_PK option specifies the record primary key collision policy for tables with a primary key, while IGNORE_EXISTING_PK specifies the record primary key collision error-suppression policy when those collisions result in the update being rejected. Both are ignored on tables with no primary key.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
updateRecords
public <TRequest> UpdateRecordsResponse updateRecords(UpdateRecordsRequest<TRequest> request) throws GPUdbException
Runs multiple predicate-based updates in a single call. With the list of given expressions, any matching record's column values will be updated as provided in newValuesMaps. There is also an optional 'upsert' capability: if a particular predicate doesn't match any existing record, then a new record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default, only 'pure primary key' predicates are allowed when updating primary key values. If the primary key for a table is the column 'attr1', then the operation will only accept predicates of the form "attr1 == 'foo'" if the attr1 column is being updated. For a composite primary key (e.g., columns 'attr1' and 'attr2'), the operation will only accept predicates of the form "(attr1 == 'foo') and (attr2 == 'bar')". That is, all primary key columns must appear in an equality predicate in the expressions. Furthermore, each 'pure primary key' predicate must be unique within a given request. These restrictions can be removed by utilizing some available options through options.
The UPDATE_ON_EXISTING_PK option specifies the record primary key collision policy for tables with a primary key, while IGNORE_EXISTING_PK specifies the record primary key collision error-suppression policy when those collisions result in the update being rejected. Both are ignored on tables with no primary key.
- Type Parameters:
TRequest - The type of object being added.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
updateRecords
public <TRequest> UpdateRecordsResponse updateRecords(TypeObjectMap<TRequest> typeObjectMap, UpdateRecordsRequest<TRequest> request) throws GPUdbException
Runs multiple predicate-based updates in a single call. With the list of given expressions, any matching record's column values will be updated as provided in newValuesMaps. There is also an optional 'upsert' capability: if a particular predicate doesn't match any existing record, then a new record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default, only 'pure primary key' predicates are allowed when updating primary key values. If the primary key for a table is the column 'attr1', then the operation will only accept predicates of the form "attr1 == 'foo'" if the attr1 column is being updated. For a composite primary key (e.g., columns 'attr1' and 'attr2'), the operation will only accept predicates of the form "(attr1 == 'foo') and (attr2 == 'bar')". That is, all primary key columns must appear in an equality predicate in the expressions. Furthermore, each 'pure primary key' predicate must be unique within a given request. These restrictions can be removed by utilizing some available options through options.
The UPDATE_ON_EXISTING_PK option specifies the record primary key collision policy for tables with a primary key, while IGNORE_EXISTING_PK specifies the record primary key collision error-suppression policy when those collisions result in the update being rejected. Both are ignored on tables with no primary key.
- Type Parameters:
TRequest - The type of object being added.
- Parameters:
typeObjectMap - Type object map used for encoding input objects.
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
IllegalArgumentException - if typeObjectMap is not an instance of one of the following: Type, TypeObjectMap, Schema, or a Class that implements IndexedRecord
GPUdbException - if an error occurs during the operation.
-
updateRecords
public <TRequest> UpdateRecordsResponse updateRecords(String tableName, List<String> expressions, List<Map<String,String>> newValuesMaps, List<TRequest> data, Map<String,String> options) throws GPUdbException
Runs multiple predicate-based updates in a single call. With the list of given expressions, any matching record's column values will be updated as provided in newValuesMaps. There is also an optional 'upsert' capability: if a particular predicate doesn't match any existing record, then a new record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default, only 'pure primary key' predicates are allowed when updating primary key values. If the primary key for a table is the column 'attr1', then the operation will only accept predicates of the form "attr1 == 'foo'" if the attr1 column is being updated. For a composite primary key (e.g., columns 'attr1' and 'attr2'), the operation will only accept predicates of the form "(attr1 == 'foo') and (attr2 == 'bar')". That is, all primary key columns must appear in an equality predicate in the expressions. Furthermore, each 'pure primary key' predicate must be unique within a given request. These restrictions can be removed by utilizing some available options through options.
The UPDATE_ON_EXISTING_PK option specifies the record primary key collision policy for tables with a primary key, while IGNORE_EXISTING_PK specifies the record primary key collision error-suppression policy when those collisions result in the update being rejected. Both are ignored on tables with no primary key.
- Type Parameters:
TRequest - The type of object being added.
- Parameters:
tableName - Name of table to be updated, in [schema_name.]table_name format, using standard name resolution rules. Must be a currently existing table and not a view.
expressions - A list of the actual predicates, one for each update; format should follow the guidelines here.
newValuesMaps - List of new values for the matching records. Each element is a map with (key, value) pairs where the keys are the names of the columns whose values are to be updated; the values are the new values. The number of elements in the list should match the length of expressions.
data - An *optional* list of new binary-avro encoded records to insert, one for each update. If one of expressions does not yield a matching record to be updated, then the corresponding element from this list will be added to the table. The default value is an empty List.
options - Optional parameters.
GLOBAL_EXPRESSION: An optional global expression to reduce the search space of the predicates listed in expressions. The default value is ''.
BYPASS_SAFETY_CHECKS: When set to TRUE, all predicates are available for primary key updates. Keep in mind that it is possible to destroy data in this case, since a single predicate may match multiple objects (potentially all records of a table), and then updating all of those records to have the same primary key will, due to the primary key uniqueness constraints, effectively delete all but one of those updated records. Supported values: TRUE, FALSE. The default value is FALSE.
UPDATE_ON_EXISTING_PK: Specifies the record collision policy for updating a table with a primary key. There are two ways that a record collision can occur. The first is an "update collision", which happens when the update changes the value of the updated record's primary key, and that new primary key already exists as the primary key of another record in the table. The second is an "insert collision", which occurs when a given filter in expressions finds no records to update, and the alternate insert record given in data (or recordsToInsertStr) contains a primary key matching that of an existing record in the table. If UPDATE_ON_EXISTING_PK is set to TRUE, "update collisions" will result in the existing record collided into being removed and the record updated with values specified in newValuesMaps taking its place; "insert collisions" will result in the collided-into record being updated with the values in data/recordsToInsertStr (if given). If set to FALSE, the existing collided-into record will remain unchanged, while the update will be rejected and the error handled as determined by IGNORE_EXISTING_PK. If the specified table does not have a primary key, then this option has no effect. Supported values:
TRUE: Overwrite the collided-into record when updating a record's primary key or inserting an alternate record causes a primary key collision between the record being updated/inserted and another existing record in the table
FALSE: Reject updates which cause primary key collisions between the record being updated/inserted and an existing record in the table
The default value is FALSE.
IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for updating a table with a primary key, only used when primary key record collisions are rejected (UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any record update that is rejected for resulting in a primary key collision with an existing table record will be ignored with no error generated. If FALSE, the rejection of any update for resulting in a primary key collision will cause an error to be reported. If the specified table does not have a primary key or if UPDATE_ON_EXISTING_PK is TRUE, then this option has no effect. Supported values:
TRUE: Ignore updates that result in primary key collisions with existing records
FALSE: Treat as errors any updates that result in primary key collisions with existing records
The default value is FALSE.
UPDATE_PARTITION: Force qualifying records to be deleted and reinserted so their partition membership will be reevaluated. Supported values: TRUE, FALSE. The default value is FALSE.
TRUNCATE_STRINGS: If set to TRUE, any strings which are too long for their charN string fields will be truncated to fit. Supported values: TRUE, FALSE. The default value is FALSE.
USE_EXPRESSIONS_IN_NEW_VALUES_MAPS: When set to TRUE, all new values in newValuesMaps are considered as expression values. When set to FALSE, all new values in newValuesMaps are considered as constants. NOTE: When TRUE, string constants will need to be quoted to avoid being evaluated as expressions. Supported values: TRUE, FALSE. The default value is FALSE.
RECORD_ID: ID of a single record to be updated (returned in the call to insertRecords or getRecordsFromCollection).
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
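The expressions/newValuesMaps pairing described above can be sketched as follows. Table and column names are placeholders, and the client call is left in comments so the snippet runs without the gpudb library; the key point is that the map at index i applies to the records matched by the expression at index i:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UpdateRecordsArgsExample {
    // One predicate per update; each entry in newValuesMaps pairs with the
    // expression at the same index.
    static List<String> buildExpressions() {
        return Arrays.asList("id == 42", "status == 'stale'");
    }

    static List<Map<String, String>> buildNewValuesMaps() {
        Map<String, String> first = new HashMap<>();
        first.put("name", "updated");   // set column 'name' where id == 42
        Map<String, String> second = new HashMap<>();
        second.put("status", "fresh");  // set column 'status' on stale records
        return new ArrayList<>(Arrays.asList(first, second));
    }

    public static void main(String[] args) {
        // Hypothetical call (table and column names are placeholders):
        // UpdateRecordsResponse resp = gpudb.updateRecords(
        //     "my_schema.my_table", buildExpressions(), buildNewValuesMaps(),
        //     new ArrayList<>(), new HashMap<>());
        System.out.println(buildExpressions().size() + " updates prepared");
    }
}
```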
-
updateRecords
public <TRequest> UpdateRecordsResponse updateRecords(TypeObjectMap<TRequest> typeObjectMap, String tableName, List<String> expressions, List<Map<String,String>> newValuesMaps, List<TRequest> data, Map<String,String> options) throws GPUdbException
Runs multiple predicate-based updates in a single call. With the list of given expressions, any matching record's column values will be updated as provided innewValuesMaps. There is also an optional 'upsert' capability where if a particular predicate doesn't match any existing record, then a new record can be inserted.Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default only 'pure primary key' predicates are allowed when updating primary key values. If the primary key for a table is the column 'attr1', then the operation will only accept predicates of the form: "attr1 == 'foo'" if the attr1 column is being updated. For a composite primary key (e.g. columns 'attr1' and 'attr2') then this operation will only accept predicates of the form: "(attr1 == 'foo') and (attr2 == 'bar')". Meaning, all primary key columns must appear in an equality predicate in the expressions. Furthermore each 'pure primary key' predicate must be unique within a given request. These restrictions can be removed by utilizing some available options through
options.The
UPDATE_ON_EXISTING_PKoption specifies the record primary key collision policy for tables with a primary key, whileIGNORE_EXISTING_PKspecifies the record primary key collision error-suppression policy when those collisions result in the update being rejected. Both are ignored on tables with no primary key.- Type Parameters:
TRequest- The type of object being added.- Parameters:
typeObjectMap - Type object map used for encoding input objects.
tableName - Name of the table to be updated, in [schema_name.]table_name format, using standard name resolution rules. Must be a currently existing table and not a view.
expressions - A list of the actual predicates, one for each update; format should follow the guidelines here.
newValuesMaps - List of new values for the matching records. Each element is a map with (key, value) pairs where the keys are the names of the columns whose values are to be updated; the values are the new values. The number of elements in the list should match the length of expressions.
data - An *optional* list of new binary-avro encoded records to insert, one for each update. If one of expressions does not yield a matching record to be updated, then the corresponding element from this list will be added to the table. The default value is an empty List.
options - Optional parameters.
GLOBAL_EXPRESSION: An optional global expression to reduce the search space of the predicates listed in expressions. The default value is ''.
BYPASS_SAFETY_CHECKS: When set to TRUE, all predicates are available for primary key updates. Keep in mind that it is possible to destroy data in this case, since a single predicate may match multiple objects (potentially all records of a table), and updating all of those records to have the same primary key will, due to the primary key uniqueness constraint, effectively delete all but one of those updated records. Supported values: TRUE, FALSE. The default value is FALSE.
UPDATE_ON_EXISTING_PK: Specifies the record collision policy for updating a table with a primary key. There are two ways that a record collision can occur. The first is an "update collision", which happens when the update changes the value of the updated record's primary key, and that new primary key already exists as the primary key of another record in the table. The second is an "insert collision", which occurs when a given filter in expressions finds no records to update, and the alternate insert record given in data (or recordsToInsertStr) contains a primary key matching that of an existing record in the table. If UPDATE_ON_EXISTING_PK is set to TRUE, "update collisions" will result in the existing collided-into record being removed, with the record updated with the values specified in newValuesMaps taking its place; "insert collisions" will result in the collided-into record being updated with the values in data/recordsToInsertStr (if given). If set to FALSE, the existing collided-into record will remain unchanged, while the update will be rejected and the error handled as determined by IGNORE_EXISTING_PK. If the specified table does not have a primary key, then this option has no effect. Supported values:
TRUE: Overwrite the collided-into record when updating a record's primary key or inserting an alternate record causes a primary key collision between the record being updated/inserted and another existing record in the table
FALSE: Reject updates which cause primary key collisions between the record being updated/inserted and an existing record in the table
The default value is FALSE.
IGNORE_EXISTING_PK: Specifies the record collision error-suppression policy for updating a table with a primary key, only used when primary key record collisions are rejected (UPDATE_ON_EXISTING_PK is FALSE). If set to TRUE, any record update that is rejected for resulting in a primary key collision with an existing table record will be ignored with no error generated. If FALSE, the rejection of any update for resulting in a primary key collision will cause an error to be reported. If the specified table does not have a primary key or if UPDATE_ON_EXISTING_PK is TRUE, then this option has no effect. Supported values:
TRUE: Ignore updates that result in primary key collisions with existing records
FALSE: Treat as errors any updates that result in primary key collisions with existing records
The default value is FALSE.
UPDATE_PARTITION: Force qualifying records to be deleted and reinserted so their partition membership will be reevaluated. Supported values: TRUE, FALSE. The default value is FALSE.
TRUNCATE_STRINGS: If set to TRUE, any strings which are too long for their charN string fields will be truncated to fit. Supported values: TRUE, FALSE. The default value is FALSE.
USE_EXPRESSIONS_IN_NEW_VALUES_MAPS: When set to TRUE, all new values in newValuesMaps are considered as expression values. When set to FALSE, all new values in newValuesMaps are considered as constants. NOTE: When TRUE, string constants will need to be quoted to avoid being evaluated as expressions. Supported values: TRUE, FALSE. The default value is FALSE.
RECORD_ID: ID of a single record to be updated (returned in the call to insertRecords or getRecordsFromCollection).
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
IllegalArgumentException - if typeObjectMap is not an instance of one of the following: Type, TypeObjectMap, Schema, or a Class that implements IndexedRecord
GPUdbException - if an error occurs during the operation.
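The predicate/new-values pairing above can be sketched as follows. This is a minimal sketch, not a definitive usage: the table name, column names, and server URL are hypothetical, and the option keys are assumed to be the lowercase wire forms of the documented constants. The client call itself is shown as a comment, since it requires a reachable server and a type object map for the table's type.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UpdateRecordsSketch {
    // Builds the options map for updateRecords; keys are assumed to be the
    // lowercase wire names of the documented constants (GLOBAL_EXPRESSION, etc.).
    static Map<String, String> buildOptions(String globalExpression) {
        Map<String, String> options = new HashMap<>();
        options.put("global_expression", globalExpression);
        options.put("update_on_existing_pk", "false"); // reject PK collisions
        options.put("truncate_strings", "true");
        return options;
    }

    public static void main(String[] args) {
        // One predicate per update, and one new-values map per predicate.
        List<String> expressions = new ArrayList<>();
        expressions.add("symbol = 'ACME'");

        List<Map<String, String>> newValuesMaps = new ArrayList<>();
        Map<String, String> newValues = new HashMap<>();
        newValues.put("price", "101.5"); // column name -> new value (constant)
        newValuesMaps.add(newValues);

        Map<String, String> options = buildOptions("exchange = 'NYSE'");

        // With a connected client (hypothetical URL and table) the call would be:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // gpudb.updateRecords(typeObjectMap, "ki_home.stocks",
        //                     expressions, newValuesMaps, data, options);

        System.out.println(options.get("update_on_existing_pk"));
    }
}
```

Note that expressions and newValuesMaps are positionally paired, so their sizes must match.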
-
uploadFiles
public UploadFilesResponse uploadFiles(UploadFilesRequest request) throws GPUdbException
Uploads one or more files to KiFS. There are two methods for uploading files: load files in their entirety, or load files in parts. The latter is recommended for files of approximately 60 MB or larger.

To upload files in their entirety, populate fileNames with the file names to upload into on KiFS, and their respective byte content in fileData.

Multiple steps are involved when uploading in multiple parts. Only one file at a time can be uploaded in this manner. A user-provided UUID is utilized to tie all the upload steps together for a given file. To upload a file in multiple parts:

1. Provide the file name in fileNames, the UUID in the MULTIPART_UPLOAD_UUID key in options, and a MULTIPART_OPERATION value of INIT.
2. Upload one or more parts by providing the file name, the part data in fileData, the UUID, a MULTIPART_OPERATION value of UPLOAD_PART, and the part number in the MULTIPART_UPLOAD_PART_NUMBER key. The part numbers must start at 1 and increase incrementally. Parts may not be uploaded out of order.
3. Complete the upload by providing the file name, the UUID, and a MULTIPART_OPERATION value of COMPLETE.

Multipart uploads in progress may be canceled by providing the file name, the UUID, and a MULTIPART_OPERATION value of CANCEL. If a new upload is initialized with a different UUID for an existing upload in progress, the pre-existing upload is automatically canceled in favor of the new upload.

The multipart upload must be completed for the file to be usable in KiFS. Information about multipart uploads in progress is available in showFiles.

File data may be pre-encoded using base64 encoding. This should be indicated using the FILE_ENCODING option, and is recommended when using JSON serialization.

Each file path must reside in a top-level KiFS directory, i.e. one of the directories listed in showDirectories. The user must have write permission on the directory. Nested directories are permitted in file name paths. Directories are delineated with the directory separator of '/'. For example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.

These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
uploadFiles
public UploadFilesResponse uploadFiles(List<String> fileNames, List<ByteBuffer> fileData, Map<String,String> options) throws GPUdbException
Uploads one or more files to KiFS. There are two methods for uploading files: load files in their entirety, or load files in parts. The latter is recommended for files of approximately 60 MB or larger.

To upload files in their entirety, populate fileNames with the file names to upload into on KiFS, and their respective byte content in fileData.

Multiple steps are involved when uploading in multiple parts. Only one file at a time can be uploaded in this manner. A user-provided UUID is utilized to tie all the upload steps together for a given file. To upload a file in multiple parts:

1. Provide the file name in fileNames, the UUID in the MULTIPART_UPLOAD_UUID key in options, and a MULTIPART_OPERATION value of INIT.
2. Upload one or more parts by providing the file name, the part data in fileData, the UUID, a MULTIPART_OPERATION value of UPLOAD_PART, and the part number in the MULTIPART_UPLOAD_PART_NUMBER key. The part numbers must start at 1 and increase incrementally. Parts may not be uploaded out of order.
3. Complete the upload by providing the file name, the UUID, and a MULTIPART_OPERATION value of COMPLETE.

Multipart uploads in progress may be canceled by providing the file name, the UUID, and a MULTIPART_OPERATION value of CANCEL. If a new upload is initialized with a different UUID for an existing upload in progress, the pre-existing upload is automatically canceled in favor of the new upload.

The multipart upload must be completed for the file to be usable in KiFS. Information about multipart uploads in progress is available in showFiles.

File data may be pre-encoded using base64 encoding. This should be indicated using the FILE_ENCODING option, and is recommended when using JSON serialization.

Each file path must reside in a top-level KiFS directory, i.e. one of the directories listed in showDirectories. The user must have write permission on the directory. Nested directories are permitted in file name paths. Directories are delineated with the directory separator of '/'. For example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.

These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.
- Parameters:
fileNames - An array of full file name paths to be used for the files uploaded to KiFS. File names may have any number of nested directories in their paths, but the top-level directory must be an existing KiFS directory. Each file must reside in or under a top-level directory. A full file name path cannot be larger than 1024 characters.
fileData - File data for the files being uploaded, for the respective files in fileNames.
options - Optional parameters.
FILE_ENCODING: Encoding that has been applied to the uploaded file data. When using JSON serialization it is recommended to utilize BASE64. The caller is responsible for encoding the data provided in this payload. Supported values:
BASE64: Specifies that the file data being uploaded has been base64 encoded.
NONE: The uploaded file data has not been encoded.
The default value is NONE.
MULTIPART_OPERATION: Multipart upload operation to perform. Supported values:
NONE: Default, indicates this is not a multipart upload
INIT: Initialize a multipart file upload
UPLOAD_PART: Uploads a part of the specified multipart file upload
COMPLETE: Complete the specified multipart file upload
CANCEL: Cancel the specified multipart file upload
The default value is NONE.
MULTIPART_UPLOAD_UUID: UUID to uniquely identify a multipart upload
MULTIPART_UPLOAD_PART_NUMBER: Incremental part number for each part in a multipart upload. Part numbers start at 1, increment by 1, and must be uploaded sequentially
DELETE_IF_EXISTS: If TRUE, any existing files specified in fileNames will be deleted prior to the start of the upload; otherwise, each file is replaced once its upload completes. If a file was deleted beforehand, rollback of the original file is no longer possible should the upload be cancelled, aborted, or fail. Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
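The three multipart steps above can be sketched as follows. This is a minimal sketch under stated assumptions: the KiFS directory and server URL are hypothetical, and the option keys/values are assumed to be the lowercase wire forms of the documented MULTIPART_* constants. The uploadFiles calls are shown as comments, since they require a running server.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class MultipartUploadSketch {
    // Options for one multipart step; keys/values assumed to be the lowercase
    // wire forms of the documented constants.
    static Map<String, String> stepOptions(String operation, String uuid, Integer partNumber) {
        Map<String, String> options = new HashMap<>();
        options.put("multipart_operation", operation);
        options.put("multipart_upload_uuid", uuid); // ties all steps together
        if (partNumber != null) {
            options.put("multipart_upload_part_number", partNumber.toString());
        }
        return options;
    }

    public static void main(String[] args) {
        // 'kifs_dir' must be an existing top-level KiFS directory.
        List<String> fileNames = Collections.singletonList("/kifs_dir/big_file.bin");
        String uuid = UUID.randomUUID().toString();

        ByteBuffer part1 = ByteBuffer.wrap("chunk-1".getBytes(StandardCharsets.UTF_8));

        // With a connected client, the three steps would be:
        // gpudb.uploadFiles(fileNames, null, stepOptions("init", uuid, null));
        // gpudb.uploadFiles(fileNames, Collections.singletonList(part1),
        //                   stepOptions("upload_part", uuid, 1)); // parts start at 1
        // gpudb.uploadFiles(fileNames, null, stepOptions("complete", uuid, null));
        // The file is not usable in KiFS until the COMPLETE step succeeds.

        System.out.println(stepOptions("upload_part", uuid, 1)
                .get("multipart_upload_part_number"));
    }
}
```

Because parts must be uploaded sequentially starting at 1, a loop over the file's chunks would increment the part number on each UPLOAD_PART call.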
-
uploadFilesFromurl
public UploadFilesFromurlResponse uploadFilesFromurl(UploadFilesFromurlRequest request) throws GPUdbException
Uploads one or more files to KiFS.

Each file path must reside in a top-level KiFS directory, i.e. one of the directories listed in showDirectories. The user must have write permission on the directory. Nested directories are permitted in file name paths. Directories are delineated with the directory separator of '/'. For example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.

These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
uploadFilesFromurl
public UploadFilesFromurlResponse uploadFilesFromurl(List<String> fileNames, List<String> urls, Map<String,String> options) throws GPUdbException
Uploads one or more files to KiFS.

Each file path must reside in a top-level KiFS directory, i.e. one of the directories listed in showDirectories. The user must have write permission on the directory. Nested directories are permitted in file name paths. Directories are delineated with the directory separator of '/'. For example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.

These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.
- Parameters:
fileNames - An array of full file name paths to be used for the files uploaded to KiFS. File names may have any number of nested directories in their paths, but the top-level directory must be an existing KiFS directory. Each file must reside in or under a top-level directory. A full file name path cannot be larger than 1024 characters.
urls - List of URLs to upload, for each respective file in fileNames.
options - Optional parameters. The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
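The one-URL-per-destination pairing can be sketched as follows. This is a minimal sketch: the KiFS directory, source URL, and server URL are all hypothetical, and the client call is shown as a comment since it requires a running server.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UploadFromUrlSketch {
    // Pairs each KiFS destination path with its source URL; the two lists
    // must be the same length and in the same order.
    static Map<String, String> pair(List<String> fileNames, List<String> urls) {
        if (fileNames.size() != urls.size()) {
            throw new IllegalArgumentException("one URL per destination file");
        }
        Map<String, String> mapping = new HashMap<>();
        for (int i = 0; i < fileNames.size(); i++) {
            mapping.put(fileNames.get(i), urls.get(i));
        }
        return mapping;
    }

    public static void main(String[] args) {
        // 'data' is assumed to be an existing top-level KiFS directory.
        List<String> fileNames = Arrays.asList("/data/report.csv");
        List<String> urls = Arrays.asList("https://example.com/report.csv");
        Map<String, String> options = new HashMap<>(); // defaults suffice

        // With a connected client:
        // new GPUdb("http://localhost:9191").uploadFilesFromurl(fileNames, urls, options);

        System.out.println(pair(fileNames, urls).get("/data/report.csv"));
    }
}
```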
-
visualizeGetFeatureInfo
public VisualizeGetFeatureInfoResponse visualizeGetFeatureInfo(VisualizeGetFeatureInfoRequest request) throws GPUdbException
- Throws:
GPUdbException
-
visualizeGetFeatureInfo
public VisualizeGetFeatureInfoResponse visualizeGetFeatureInfo(List<String> tableNames, List<String> xColumnNames, List<String> yColumnNames, List<String> geometryColumnNames, List<List<String>> queryColumnNames, String projection, double minX, double maxX, double minY, double maxY, int width, int height, int x, int y, int radius, long limit, String encoding, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
visualizeImage
public VisualizeImageResponse visualizeImage(VisualizeImageRequest request) throws GPUdbException
- Throws:
GPUdbException
-
visualizeImage
public VisualizeImageResponse visualizeImage(List<String> tableNames, List<String> worldTableNames, String xColumnName, String yColumnName, String symbolColumnName, String geometryColumnName, List<List<String>> trackIds, double minX, double maxX, double minY, double maxY, int width, int height, String projection, long bgColor, Map<String,List<String>> styleOptions, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
visualizeImageChart
public VisualizeImageChartResponse visualizeImageChart(VisualizeImageChartRequest request) throws GPUdbException
Scatter plot is the only plot type currently supported. A non-numeric column can be specified as the x or y column, and jitter can be added to it to avoid excessive overlapping. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). The image is contained in the imageData field.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
visualizeImageChart
public VisualizeImageChartResponse visualizeImageChart(String tableName, List<String> xColumnNames, List<String> yColumnNames, double minX, double maxX, double minY, double maxY, int width, int height, String bgColor, Map<String,List<String>> styleOptions, Map<String,String> options) throws GPUdbException
Scatter plot is the only plot type currently supported. A non-numeric column can be specified as the x or y column, and jitter can be added to it to avoid excessive overlapping. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). The image is contained in the imageData field.
- Parameters:
tableName - Name of the table containing the data to be drawn as a chart, in [schema_name.]table_name format, using standard name resolution rules.
xColumnNames - Names of the columns containing the data mapped to the x axis of a chart.
yColumnNames - Names of the columns containing the data mapped to the y axis of a chart.
minX - Lower bound for the x column values. For a non-numeric x column, each x column item is mapped to an integral value starting from 0.
maxX - Upper bound for the x column values. For a non-numeric x column, each x column item is mapped to an integral value starting from 0.
minY - Lower bound for the y column values. For a non-numeric y column, each y column item is mapped to an integral value starting from 0.
maxY - Upper bound for the y column values. For a non-numeric y column, each y column item is mapped to an integral value starting from 0.
width - Width of the generated image in pixels.
height - Height of the generated image in pixels.
bgColor - Background color of the generated image.
styleOptions - Rendering style options for a chart.
POINTCOLOR: The color of points in the plot represented as a hexadecimal number. The default value is '0000FF'.
POINTSIZE: The size of points in the plot represented as number of pixels. The default value is '3'.
POINTSHAPE: The shape of points in the plot. The default value is SQUARE.
CB_POINTCOLORS: Point color class break information consisting of three entries: class-break attribute, class-break values/ranges, and point color values. This option overrides the POINTCOLOR option if both are provided. Class-break ranges are represented in the form of "min:max". Class-break values/ranges and point color values are separated by cb_delimiter, e.g. {"price", "20:30;30:40;40:50", "0xFF0000;0x00FF00;0x0000FF"}.
CB_POINTSIZES: Point size class break information consisting of three entries: class-break attribute, class-break values/ranges, and point size values. This option overrides the POINTSIZE option if both are provided. Class-break ranges are represented in the form of "min:max". Class-break values/ranges and point size values are separated by cb_delimiter, e.g. {"states", "NY;TX;CA", "3;5;7"}.
CB_POINTSHAPES: Point shape class break information consisting of three entries: class-break attribute, class-break values/ranges, and point shape names. This option overrides the POINTSHAPE option if both are provided. Class-break ranges are represented in the form of "min:max". Class-break values/ranges and point shape names are separated by cb_delimiter, e.g. {"states", "NY;TX;CA", "circle;square;diamond"}.
CB_DELIMITER: A character or string which separates per-class values in a class-break style option string. The default value is ';'.
X_ORDER_BY: An expression or aggregate expression by which non-numeric x column values are sorted, e.g. "avg(price) descending".
Y_ORDER_BY: An expression or aggregate expression by which non-numeric y column values are sorted, e.g. "avg(price)", which defaults to "avg(price) ascending".
SCALE_TYPE_X: Type of x axis scale. The default value is NONE.
SCALE_TYPE_Y: Type of y axis scale. The default value is NONE.
MIN_MAX_SCALED: If this option is set to "false", this endpoint expects that the request's min/max values are not yet scaled; they will be scaled according to SCALE_TYPE_X or SCALE_TYPE_Y for the response. If this option is set to "true", this endpoint expects that the request's min/max values are already scaled according to SCALE_TYPE_X/SCALE_TYPE_Y, and the response's min/max values will be equal to the request's min/max values. The default value is 'false'.
JITTER_X: Amplitude of horizontal jitter applied to non-numeric x column values. The default value is '0.0'.
JITTER_Y: Amplitude of vertical jitter applied to non-numeric y column values. The default value is '0.0'.
PLOT_ALL: If this option is set to "true", all non-numeric column values are plotted, ignoring the min_x, max_x, min_y and max_y parameters. The default value is 'false'.
options - Optional parameters.
IMAGE_ENCODING: Encoding to be applied to the output image. When using JSON serialization it is recommended to specify this as BASE64. Supported values:
BASE64: Apply base64 encoding to the output image.
NONE: Do not apply any additional encoding to the output image.
The default value is NONE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
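A typical scatter-plot invocation can be sketched as follows. This is a minimal sketch under stated assumptions: the table, columns, and bounds are hypothetical, and the style/option keys are assumed to be the lowercase wire forms of the documented constants. The call itself is commented out, since it requires a running server.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ScatterChartSketch {
    // Style options for the chart; keys assumed to be the lowercase wire
    // forms of the documented POINTCOLOR/POINTSIZE/CB_* constants.
    static Map<String, List<String>> buildStyleOptions() {
        Map<String, List<String>> styleOptions = new HashMap<>();
        styleOptions.put("pointcolor", Collections.singletonList("FF0000")); // RRGGBB
        styleOptions.put("pointsize", Collections.singletonList("5"));
        // Class-break colors: attribute, ranges, colors (cb_delimiter-separated);
        // this overrides pointcolor where the ranges match.
        styleOptions.put("cb_pointcolors",
                Arrays.asList("price", "20:30;30:40", "0xFF0000;0x00FF00"));
        return styleOptions;
    }

    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        options.put("image_encoding", "base64"); // recommended with JSON serialization

        // Hypothetical table/columns; renders an 800x600 scatter plot:
        // VisualizeImageChartResponse resp = gpudb.visualizeImageChart(
        //         "ki_home.stocks",
        //         Arrays.asList("open_price"), Arrays.asList("close_price"),
        //         0, 500, 0, 500, 800, 600, "FFFFFF",
        //         buildStyleOptions(), options);
        // The rendered image is then available via the response's imageData field.

        System.out.println(buildStyleOptions().get("pointsize").get(0));
    }
}
```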
-
visualizeImageClassbreak
public VisualizeImageClassbreakResponse visualizeImageClassbreak(VisualizeImageClassbreakRequest request) throws GPUdbException
- Throws:
GPUdbException
-
visualizeImageClassbreak
public VisualizeImageClassbreakResponse visualizeImageClassbreak(List<String> tableNames, List<String> worldTableNames, String xColumnName, String yColumnName, String symbolColumnName, String geometryColumnName, List<List<String>> trackIds, String cbAttr, List<String> cbVals, String cbPointcolorAttr, List<String> cbPointcolorVals, String cbPointalphaAttr, List<String> cbPointalphaVals, String cbPointsizeAttr, List<String> cbPointsizeVals, String cbPointshapeAttr, List<String> cbPointshapeVals, double minX, double maxX, double minY, double maxY, int width, int height, String projection, long bgColor, Map<String,List<String>> styleOptions, Map<String,String> options, List<Integer> cbTransparencyVec) throws GPUdbException
- Throws:
GPUdbException
-
visualizeImageContour
public VisualizeImageContourResponse visualizeImageContour(VisualizeImageContourRequest request) throws GPUdbException
- Throws:
GPUdbException
-
visualizeImageContour
public VisualizeImageContourResponse visualizeImageContour(List<String> tableNames, String xColumnName, String yColumnName, String valueColumnName, double minX, double maxX, double minY, double maxY, int width, int height, String projection, Map<String,String> styleOptions, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
visualizeImageHeatmap
public VisualizeImageHeatmapResponse visualizeImageHeatmap(VisualizeImageHeatmapRequest request) throws GPUdbException
- Throws:
GPUdbException
-
visualizeImageHeatmap
public VisualizeImageHeatmapResponse visualizeImageHeatmap(List<String> tableNames, String xColumnName, String yColumnName, String valueColumnName, String geometryColumnName, double minX, double maxX, double minY, double maxY, int width, int height, String projection, Map<String,String> styleOptions, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
visualizeImageLabels
public VisualizeImageLabelsResponse visualizeImageLabels(VisualizeImageLabelsRequest request) throws GPUdbException
- Throws:
GPUdbException
-
visualizeImageLabels
public VisualizeImageLabelsResponse visualizeImageLabels(String tableName, String xColumnName, String yColumnName, String xOffset, String yOffset, String textString, String font, String textColor, String textAngle, String textScale, String drawBox, String drawLeader, String lineWidth, String lineColor, String fillColor, String leaderXColumnName, String leaderYColumnName, String filter, double minX, double maxX, double minY, double maxY, int width, int height, String projection, Map<String,String> options) throws GPUdbException
- Throws:
GPUdbException
-
visualizeIsochrone
public VisualizeIsochroneResponse visualizeIsochrone(VisualizeIsochroneRequest request) throws GPUdbException
Generate an image containing isolines for travel results using an existing graph. Isolines represent curves of equal cost, with cost typically referring to the time or distance assigned as the weights of the underlying graph. See Graphs and Solvers for more information on graphs.
- Parameters:
request - Request object containing the parameters for the operation.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
-
visualizeIsochrone
public VisualizeIsochroneResponse visualizeIsochrone(String graphName, String sourceNode, double maxSolutionRadius, List<String> weightsOnEdges, List<String> restrictions, int numLevels, boolean generateImage, String levelsTable, Map<String,String> styleOptions, Map<String,String> solveOptions, Map<String,String> contourOptions, Map<String,String> options) throws GPUdbException
Generate an image containing isolines for travel results using an existing graph. Isolines represent curves of equal cost, with cost typically referring to the time or distance assigned as the weights of the underlying graph. See Graphs and Solvers for more information on graphs.
- Parameters:
graphName - Name of the graph on which the isochrone is to be computed.
sourceNode - Starting vertex on the underlying graph from/to which the isochrones are created.
maxSolutionRadius - Extent of the search radius around sourceNode. Set to '-1.0' for unrestricted search radius. The default value is -1.0.
weightsOnEdges - Additional weights to apply to the edges of an existing graph. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS WEIGHTS_EDGE_ID', or expressions, e.g., 'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED'. Any provided weights will be added (in the case of 'WEIGHTS_VALUESPECIFIED') to or multiplied with (in the case of 'WEIGHTS_FACTORSPECIFIED') the existing weight(s). The default value is an empty List.
restrictions - Additional restrictions to apply to the nodes/edges of an existing graph. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., 'table.column AS RESTRICTIONS_EDGE_ID', or expressions, e.g., 'column/2 AS RESTRICTIONS_VALUECOMPARED'. If REMOVE_PREVIOUS_RESTRICTIONS is set to TRUE, any provided restrictions will replace the existing restrictions. If REMOVE_PREVIOUS_RESTRICTIONS is set to FALSE, any provided restrictions will be added (in the case of 'RESTRICTIONS_VALUECOMPARED') to or replaced (in the case of 'RESTRICTIONS_ONOFFCOMPARED'). The default value is an empty List.
numLevels - Number of equally-separated isochrones to compute. The default value is 1.
generateImage - If set to TRUE, generates a PNG image of the isochrones in the response. Supported values: true, false. The default value is true.
levelsTable - Name of the table to output the isochrones to, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. The table will contain levels and their corresponding WKT geometry. If no value is provided, the table is not generated. The default value is ''.
styleOptions - Various style related options of the isochrone image.
LINE_SIZE: The width of the contour lines in pixels. The default value is '3'. The minimum allowed value is '0'. The maximum allowed value is '20'.
COLOR: Color of generated isolines. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). If alpha is specified and flooded contours are enabled, it will be used as the transparency of the latter. The default value is 'FF696969'.
BG_COLOR: When generateImage is set to TRUE, the background color of the generated image. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). The default value is '00000000'.
TEXT_COLOR: When ADD_LABELS is set to TRUE, the color for the labels. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). The default value is 'FF000000'.
COLORMAP: Colormap for contours or fill-in regions when applicable. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). Supported values: JET, ACCENT, AFMHOT, AUTUMN, BINARY, BLUES, BONE, BRBG, BRG, BUGN, BUPU, BWR, CMRMAP, COOL, COOLWARM, COPPER, CUBEHELIX, DARK2, FLAG, GIST_EARTH, GIST_GRAY, GIST_HEAT, GIST_NCAR, GIST_RAINBOW, GIST_STERN, GIST_YARG, GNBU, GNUPLOT2, GNUPLOT, GRAY, GREENS, GREYS, HOT, HSV, INFERNO, MAGMA, NIPY_SPECTRAL, OCEAN, ORANGES, ORRD, PAIRED, PASTEL1, PASTEL2, PINK, PIYG, PLASMA, PRGN, PRISM, PUBU, PUBUGN, PUOR, PURD, PURPLES, RAINBOW, RDBU, RDGY, RDPU, RDYLBU, RDYLGN, REDS, SEISMIC, SET1, SET2, SET3, SPECTRAL, SPRING, SUMMER, TERRAIN, VIRIDIS, WINTER, WISTIA, YLGN, YLGNBU, YLORBR, YLORRD. The default value is JET.
solveOptions - Solver specific parameters.
REMOVE_PREVIOUS_RESTRICTIONS: Ignore the restrictions applied to the graph during the creation stage and only use the restrictions specified in this request if set to TRUE. Supported values: TRUE, FALSE. The default value is FALSE.
RESTRICTION_THRESHOLD_VALUE: Value-based restriction comparison. Any node or edge with a 'RESTRICTIONS_VALUECOMPARED' value greater than the RESTRICTION_THRESHOLD_VALUE will not be included in the solution.
UNIFORM_WEIGHTS: When specified, assigns the given value to all the edges in the graph. Note that weights provided in weightsOnEdges will override this value.
The default value is an empty Map.
contourOptions - Contour generation specific parameters.
PROJECTION: Spatial Reference System (i.e. EPSG code). The default value is PLATE_CARREE.
WIDTH: When generateImage is set to TRUE, the width of the generated image. The default value is '512'.
HEIGHT: When generateImage is set to TRUE, the height of the generated image. If the default value is used, the HEIGHT is set to the value resulting from multiplying the aspect ratio by the WIDTH. The default value is '-1'.
SEARCH_RADIUS: When interpolating the graph solution to generate the isochrone, the neighborhood of influence of sample data (in percent of the image/grid). The default value is '20'.
GRID_SIZE: When interpolating the graph solution to generate the isochrone, the number of subdivisions along the x axis when building the grid (the y is computed using the aspect ratio of the output image). The default value is '100'.
COLOR_ISOLINES: Color each isoline according to the colormap; otherwise, use the foreground color. Supported values: TRUE, FALSE. The default value is TRUE.
ADD_LABELS: If set to TRUE, add labels to the isolines. Supported values: TRUE, FALSE. The default value is FALSE.
LABELS_FONT_SIZE: When ADD_LABELS is set to TRUE, the size of the font (in pixels) to use for labels. The default value is '12'.
LABELS_FONT_FAMILY: When ADD_LABELS is set to TRUE, the font name to be used when adding labels. The default value is 'arial'.
LABELS_SEARCH_WINDOW: When ADD_LABELS is set to TRUE, a search window is used to rate the local quality of each isoline. Smooth, continuous, long stretches with relatively flat angles are favored. The provided value is multiplied by the LABELS_FONT_SIZE to calculate the final window size. The default value is '4'.
LABELS_INTRALEVEL_SEPARATION: When ADD_LABELS is set to TRUE, this value determines the distance (in multiples of the LABELS_FONT_SIZE) to use when separating labels of different values. The default value is '4'.
LABELS_INTERLEVEL_SEPARATION: When ADD_LABELS is set to TRUE, this value determines the distance (in percent of the total window size) to use when separating labels of the same value. The default value is '20'.
LABELS_MAX_ANGLE: When ADD_LABELS is set to TRUE, the maximum angle (in degrees) from the vertical to use when adding labels. The default value is '60'.
The default value is an empty Map.
options - Additional parameters.
SOLVE_TABLE: Name of the table to host intermediate solve results, in [schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. This table will contain the position and cost for each vertex in the graph. If the default value is used, a temporary table is created and deleted once the solution is calculated. The default value is ''.
IS_REPLICATED: If set to TRUE, replicate the SOLVE_TABLE. Supported values: TRUE, FALSE. The default value is TRUE.
DATA_MIN_X: Lower bound for the x values. If not provided, it will be computed from the bounds of the input data.
DATA_MAX_X: Upper bound for the x values. If not provided, it will be computed from the bounds of the input data.
DATA_MIN_Y: Lower bound for the y values. If not provided, it will be computed from the bounds of the input data.
DATA_MAX_Y: Upper bound for the y values. If not provided, it will be computed from the bounds of the input data.
CONCAVITY_LEVEL: Factor to qualify the concavity of the isochrone curves. The lower the value, the more convex (with '0' being completely convex and '1' being the most concave). The default value is '0.5'. The minimum allowed value is '0'. The maximum allowed value is '1'.
USE_PRIORITY_QUEUE_SOLVERS: Sets the solver methods explicitly if true. Supported values:
TRUE: Uses the solvers scheduled for 'shortest_path' and 'inverse_shortest_path' based on solve_direction
FALSE: Uses the solvers 'priority_queue' and 'inverse_priority_queue' based on solve_direction
The default value is FALSE.
SOLVE_DIRECTION: Specify whether we are going to the source node, or starting from it. Supported values:
FROM_SOURCE: Shortest path to get to the source (inverse Dijkstra)
TO_SOURCE: Shortest path to source (Dijkstra)
The default value is FROM_SOURCE.
The default value is an empty Map.
- Returns:
Response object containing the results of the operation.
- Throws:
GPUdbException - if an error occurs during the operation.
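Assembling the four option maps for an isochrone request can be sketched as follows. This is a minimal sketch under stated assumptions: the graph name and source node are hypothetical, the option keys are assumed to be the lowercase wire forms of the documented constants, and the client call is commented out since it requires a running server with an existing graph.

```java
import java.util.HashMap;
import java.util.Map;

public class IsochroneSketch {
    // Bundles the four option maps taken by visualizeIsochrone; keys assumed
    // to be the lowercase wire forms of the documented constants.
    static Map<String, Map<String, String>> buildOptionMaps() {
        Map<String, String> styleOptions = new HashMap<>();
        styleOptions.put("colormap", "jet");   // documented default
        styleOptions.put("line_size", "3");    // contour line width in pixels

        Map<String, String> solveOptions = new HashMap<>(); // defaults suffice

        Map<String, String> contourOptions = new HashMap<>();
        contourOptions.put("width", "512");
        contourOptions.put("add_labels", "true");

        Map<String, String> options = new HashMap<>();
        options.put("solve_direction", "from_source");

        Map<String, Map<String, String>> all = new HashMap<>();
        all.put("style", styleOptions);
        all.put("solve", solveOptions);
        all.put("contour", contourOptions);
        all.put("options", options);
        return all;
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> maps = buildOptionMaps();

        // Hypothetical graph and source node; computes 5 isochrone levels with
        // an unrestricted search radius and returns a PNG image:
        // VisualizeIsochroneResponse resp = gpudb.visualizeIsochrone(
        //         "road_graph", "POINT(-74.0 40.7)", -1.0,
        //         Collections.emptyList(), Collections.emptyList(),
        //         5, true, "", maps.get("style"), maps.get("solve"),
        //         maps.get("contour"), maps.get("options"));

        System.out.println(maps.get("contour").get("add_labels"));
    }
}
```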
-
-