public class GPUdb extends GPUdbBase

GPUdb instances are thread safe and may be used from any number of threads simultaneously.

Nested classes inherited from class GPUdbBase:
GPUdbBase.ClusterAddressInfo, GPUdbBase.FailbackOptions, GPUdbBase.GetRecordsJsonResponse, GPUdbBase.GPUdbExitException, GPUdbBase.GPUdbFailoverDisabledException, GPUdbBase.GPUdbHAUnavailableException, GPUdbBase.GPUdbHostnameRegexFailureException, GPUdbBase.GPUdbUnauthorizedAccessException, GPUdbBase.GPUdbVersion, GPUdbBase.HAFailoverOrder, GPUdbBase.HASynchronicityMode, GPUdbBase.InsertRecordsJsonRequest, GPUdbBase.JsonOptions, GPUdbBase.Options, GPUdbBase.SubmitException

Fields inherited from class GPUdbBase:
END_OF_SET, HEADER_AUTHORIZATION, HEADER_CONTENT_TYPE, HEADER_HA_SYNC_MODE, PROTECTED_HEADERS, SslErrorMessageFormat
Constructor | Description
---|---
GPUdb(List<URL> urls) | Creates a GPUdb instance for the GPUdb server with the specified URLs using default options.
GPUdb(List<URL> urls, GPUdbBase.Options options) | Creates a GPUdb instance for the GPUdb server with the specified URLs using the specified options.
GPUdb(String url) | Creates a GPUdb instance for the GPUdb server at the specified URL using default options.
GPUdb(String url, GPUdbBase.Options options) | Creates a GPUdb instance for the GPUdb server at the specified URL using the specified options.
GPUdb(URL url) | Creates a GPUdb instance for the GPUdb server at the specified URL using default options.
GPUdb(URL url, GPUdbBase.Options options) | Creates a GPUdb instance for the GPUdb server at the specified URL using the specified options.
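As a usage sketch (not part of this class's Javadoc), a client might be constructed as follows; the URL, username, and password are placeholders for an actual Kinetica installation, and the fluent `GPUdbBase.Options` setters are assumed to behave as documented in `GPUdbBase`:

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbBase;
import com.gpudb.GPUdbException;

public class ConnectExample {
    public static void main(String[] args) throws GPUdbException {
        // Connect with default options; the URL is a placeholder.
        GPUdb db = new GPUdb("http://localhost:9191");

        // Connect with explicit options; credentials and timeout are placeholders.
        GPUdbBase.Options options = new GPUdbBase.Options()
                .setUsername("admin")
                .setPassword("password")
                .setTimeout(10000);   // request timeout in milliseconds
        GPUdb dbWithOptions = new GPUdb("http://localhost:9191", options);
    }
}
```

Either form yields a thread-safe handle that can be shared across the application.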
Modifier and Type | Method and Description
---|---
AdminAddHostResponse | adminAddHost(AdminAddHostRequest request): Adds a host to an existing cluster.
AdminAddHostResponse | adminAddHost(String hostAddress, Map<String,String> options): Adds a host to an existing cluster.
AdminAddRanksResponse | adminAddRanks(AdminAddRanksRequest request): Adds one or more ranks to an existing Kinetica cluster.
AdminAddRanksResponse | adminAddRanks(List<String> hosts, List<Map<String,String>> configParams, Map<String,String> options): Adds one or more ranks to an existing Kinetica cluster.
AdminAlterHostResponse | adminAlterHost(AdminAlterHostRequest request): Alters properties on an existing host in the cluster.
AdminAlterHostResponse | adminAlterHost(String host, Map<String,String> options): Alters properties on an existing host in the cluster.
AdminAlterJobsResponse | adminAlterJobs(AdminAlterJobsRequest request): Performs the requested action on a list of one or more jobs.
AdminAlterJobsResponse | adminAlterJobs(List<Long> jobIds, String action, Map<String,String> options): Performs the requested action on a list of one or more jobs.
AdminBackupBeginResponse | adminBackupBegin(AdminBackupBeginRequest request): Prepares the system for a backup by closing all open file handles after allowing currently active jobs to complete.
AdminBackupBeginResponse | adminBackupBegin(Map<String,String> options): Prepares the system for a backup by closing all open file handles after allowing currently active jobs to complete.
AdminBackupEndResponse | adminBackupEnd(AdminBackupEndRequest request): Restores the system to normal operating mode after a backup has completed, allowing any queries that were blocked to complete.
AdminBackupEndResponse | adminBackupEnd(Map<String,String> options): Restores the system to normal operating mode after a backup has completed, allowing any queries that were blocked to complete.
AdminHaRefreshResponse | adminHaRefresh(AdminHaRefreshRequest request): Restarts HA processing on the given cluster as a mechanism for accepting breaking HA configuration changes.
AdminHaRefreshResponse | adminHaRefresh(Map<String,String> options): Restarts HA processing on the given cluster as a mechanism for accepting breaking HA configuration changes.
AdminOfflineResponse | adminOffline(AdminOfflineRequest request): Takes the system offline.
AdminOfflineResponse | adminOffline(boolean offline, Map<String,String> options): Takes the system offline.
AdminRebalanceResponse | adminRebalance(AdminRebalanceRequest request): Rebalances the data in the cluster so that all nodes contain approximately an equal number of records, and/or rebalances the shards to be distributed as equally as possible across all ranks.
AdminRebalanceResponse | adminRebalance(Map<String,String> options): Rebalances the data in the cluster so that all nodes contain approximately an equal number of records, and/or rebalances the shards to be distributed as equally as possible across all ranks.
AdminRemoveHostResponse | adminRemoveHost(AdminRemoveHostRequest request): Removes a host from an existing cluster.
AdminRemoveHostResponse | adminRemoveHost(String host, Map<String,String> options): Removes a host from an existing cluster.
AdminRemoveRanksResponse | adminRemoveRanks(AdminRemoveRanksRequest request): Removes one or more ranks from an existing Kinetica cluster.
AdminRemoveRanksResponse | adminRemoveRanks(List<String> ranks, Map<String,String> options): Removes one or more ranks from an existing Kinetica cluster.
AdminRepairTableResponse | adminRepairTable(AdminRepairTableRequest request): Manually repairs a corrupted table.
AdminRepairTableResponse | adminRepairTable(List<String> tableNames, Map<String,String> options): Manually repairs a corrupted table.
AdminShowAlertsResponse | adminShowAlerts(AdminShowAlertsRequest request): Requests a list of the most recent alerts.
AdminShowAlertsResponse | adminShowAlerts(int numAlerts, Map<String,String> options): Requests a list of the most recent alerts.
AdminShowClusterOperationsResponse | adminShowClusterOperations(AdminShowClusterOperationsRequest request): Requests the detailed status of the current operation (by default) or of a prior cluster operation specified by historyIndex.
AdminShowClusterOperationsResponse | adminShowClusterOperations(int historyIndex, Map<String,String> options): Requests the detailed status of the current operation (by default) or of a prior cluster operation specified by historyIndex.
AdminShowJobsResponse | adminShowJobs(AdminShowJobsRequest request): Gets a list of the current jobs in GPUdb.
AdminShowJobsResponse | adminShowJobs(Map<String,String> options): Gets a list of the current jobs in GPUdb.
AdminShowShardsResponse | adminShowShards(AdminShowShardsRequest request): Shows the mapping of shards to the corresponding rank and tom.
AdminShowShardsResponse | adminShowShards(Map<String,String> options): Shows the mapping of shards to the corresponding rank and tom.
AdminShutdownResponse | adminShutdown(AdminShutdownRequest request): Exits the database server application.
AdminShutdownResponse | adminShutdown(String exitType, String authorization, Map<String,String> options): Exits the database server application.
AdminSwitchoverResponse | adminSwitchover(AdminSwitchoverRequest request): Manually switches over one or more processes to another host.
AdminSwitchoverResponse | adminSwitchover(List<String> processes, List<String> destinations, Map<String,String> options): Manually switches over one or more processes to another host.
AdminVerifyDbResponse | adminVerifyDb(AdminVerifyDbRequest request): Verifies that the database is in a consistent state.
AdminVerifyDbResponse | adminVerifyDb(Map<String,String> options): Verifies that the database is in a consistent state.
AggregateConvexHullResponse | aggregateConvexHull(AggregateConvexHullRequest request): Calculates and returns the convex hull for the values in the table specified by tableName.
AggregateConvexHullResponse | aggregateConvexHull(String tableName, String xColumnName, String yColumnName, Map<String,String> options): Calculates and returns the convex hull for the values in the table specified by tableName.
AggregateGroupByResponse | aggregateGroupBy(AggregateGroupByRequest request): Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination.
AggregateGroupByResponse | aggregateGroupBy(String tableName, List<String> columnNames, long offset, long limit, Map<String,String> options): Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination.
RawAggregateGroupByResponse | aggregateGroupByRaw(AggregateGroupByRequest request): Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination.
AggregateHistogramResponse | aggregateHistogram(AggregateHistogramRequest request): Performs a histogram calculation given a table, a column, and an interval function.
AggregateHistogramResponse | aggregateHistogram(String tableName, String columnName, double start, double end, double interval, Map<String,String> options): Performs a histogram calculation given a table, a column, and an interval function.
AggregateKMeansResponse | aggregateKMeans(AggregateKMeansRequest request): Runs the k-means algorithm, a heuristic algorithm that attempts to perform k-means clustering.
AggregateKMeansResponse | aggregateKMeans(String tableName, List<String> columnNames, int k, double tolerance, Map<String,String> options): Runs the k-means algorithm, a heuristic algorithm that attempts to perform k-means clustering.
AggregateMinMaxResponse | aggregateMinMax(AggregateMinMaxRequest request): Calculates and returns the minimum and maximum values of a particular column in a table.
AggregateMinMaxResponse | aggregateMinMax(String tableName, String columnName, Map<String,String> options): Calculates and returns the minimum and maximum values of a particular column in a table.
AggregateMinMaxGeometryResponse | aggregateMinMaxGeometry(AggregateMinMaxGeometryRequest request): Calculates and returns the minimum and maximum x- and y-coordinates of a particular geospatial geometry column in a table.
AggregateMinMaxGeometryResponse | aggregateMinMaxGeometry(String tableName, String columnName, Map<String,String> options): Calculates and returns the minimum and maximum x- and y-coordinates of a particular geospatial geometry column in a table.
AggregateStatisticsResponse | aggregateStatistics(AggregateStatisticsRequest request): Calculates the requested statistics of the given columns in a given table.
AggregateStatisticsResponse | aggregateStatistics(String tableName, String columnName, String stats, Map<String,String> options): Calculates the requested statistics of the given columns in a given table.
AggregateStatisticsByRangeResponse | aggregateStatisticsByRange(AggregateStatisticsByRangeRequest request): Divides the given set into bins and calculates statistics of the values of a value column in each bin.
AggregateStatisticsByRangeResponse | aggregateStatisticsByRange(String tableName, String selectExpression, String columnName, String valueColumnName, String stats, double start, double end, double interval, Map<String,String> options): Divides the given set into bins and calculates statistics of the values of a value column in each bin.
AggregateUniqueResponse | aggregateUnique(AggregateUniqueRequest request): Returns all the unique values from a particular column (specified by columnName) of a particular table or view (specified by tableName).
AggregateUniqueResponse | aggregateUnique(String tableName, String columnName, long offset, long limit, Map<String,String> options): Returns all the unique values from a particular column (specified by columnName) of a particular table or view (specified by tableName).
RawAggregateUniqueResponse | aggregateUniqueRaw(AggregateUniqueRequest request): Returns all the unique values from a particular column (specified by columnName) of a particular table or view (specified by tableName).
AggregateUnpivotResponse | aggregateUnpivot(AggregateUnpivotRequest request): Rotates column values into row values.
AggregateUnpivotResponse | aggregateUnpivot(String tableName, List<String> columnNames, String variableColumnName, String valueColumnName, List<String> pivotedColumns, Map<String,String> options): Rotates column values into row values.
RawAggregateUnpivotResponse | aggregateUnpivotRaw(AggregateUnpivotRequest request): Rotates column values into row values.
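As a usage sketch (not part of this class's Javadoc), the multi-parameter aggregateGroupBy overload might be called as follows; the table name "orders" and its columns "customer_id" and "amount" are hypothetical:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.AggregateGroupByResponse;

public class GroupByExample {
    // Groups a hypothetical "orders" table by customer_id, sums amount per
    // group, and fetches at most the first 100 groups (offset 0, limit 100).
    static void run(GPUdb db) throws GPUdbException {
        List<String> columns = Arrays.asList("customer_id", "sum(amount)");
        AggregateGroupByResponse response = db.aggregateGroupBy(
                "orders", columns, 0, 100, new HashMap<String,String>());
        System.out.println(response.getTotalNumberOfRecords());
    }
}
```

The aggregateGroupByRaw variant returns the same data in unparsed (raw) form for callers that want to decode the records themselves.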
AlterCredentialResponse | alterCredential(AlterCredentialRequest request): Alters the properties of an existing credential.
AlterCredentialResponse | alterCredential(String credentialName, Map<String,String> credentialUpdatesMap, Map<String,String> options): Alters the properties of an existing credential.
AlterDatasinkResponse | alterDatasink(AlterDatasinkRequest request): Alters the properties of an existing data sink.
AlterDatasinkResponse | alterDatasink(String name, Map<String,String> datasinkUpdatesMap, Map<String,String> options): Alters the properties of an existing data sink.
AlterDatasourceResponse | alterDatasource(AlterDatasourceRequest request): Alters the properties of an existing data source.
AlterDatasourceResponse | alterDatasource(String name, Map<String,String> datasourceUpdatesMap, Map<String,String> options): Alters the properties of an existing data source.
AlterDirectoryResponse | alterDirectory(AlterDirectoryRequest request): Alters an existing directory in KiFS.
AlterDirectoryResponse | alterDirectory(String directoryName, Map<String,String> directoryUpdatesMap, Map<String,String> options): Alters an existing directory in KiFS.
AlterEnvironmentResponse | alterEnvironment(AlterEnvironmentRequest request): Alters an existing environment which can be referenced by a user-defined function (UDF).
AlterEnvironmentResponse | alterEnvironment(String environmentName, String action, String value, Map<String,String> options): Alters an existing environment which can be referenced by a user-defined function (UDF).
AlterGraphResponse | alterGraph(AlterGraphRequest request)
AlterGraphResponse | alterGraph(String graphName, String action, String actionArg, Map<String,String> options)
AlterModelResponse | alterModel(AlterModelRequest request)
AlterModelResponse | alterModel(String modelName, String action, String value, Map<String,String> options)
AlterResourceGroupResponse | alterResourceGroup(AlterResourceGroupRequest request): Alters the properties of an existing resource group to facilitate resource management.
AlterResourceGroupResponse | alterResourceGroup(String name, Map<String,Map<String,String>> tierAttributes, String ranking, String adjoiningResourceGroup, Map<String,String> options): Alters the properties of an existing resource group to facilitate resource management.
AlterRoleResponse | alterRole(AlterRoleRequest request): Alters a role.
AlterRoleResponse | alterRole(String name, String action, String value, Map<String,String> options): Alters a role.
AlterSchemaResponse | alterSchema(AlterSchemaRequest request): Used to change the name of a SQL-style schema.
AlterSchemaResponse | alterSchema(String schemaName, String action, String value, Map<String,String> options): Used to change the name of a SQL-style schema.
AlterSystemPropertiesResponse | alterSystemProperties(AlterSystemPropertiesRequest request): The alterSystemProperties endpoint is primarily used to simplify the testing of the system and is not expected to be used during normal execution.
AlterSystemPropertiesResponse | alterSystemProperties(Map<String,String> propertyUpdatesMap, Map<String,String> options): The alterSystemProperties endpoint is primarily used to simplify the testing of the system and is not expected to be used during normal execution.
AlterTableResponse | alterTable(AlterTableRequest request): Applies various modifications to a table or view.
AlterTableResponse | alterTable(String tableName, String action, String value, Map<String,String> options): Applies various modifications to a table or view.
AlterTableColumnsResponse | alterTableColumns(AlterTableColumnsRequest request): Applies various modifications to columns in a table or view.
AlterTableColumnsResponse | alterTableColumns(String tableName, List<Map<String,String>> columnAlterations, Map<String,String> options): Applies various modifications to columns in a table or view.
AlterTableMetadataResponse | alterTableMetadata(AlterTableMetadataRequest request): Updates (adds or changes) metadata for tables.
AlterTableMetadataResponse | alterTableMetadata(List<String> tableNames, Map<String,String> metadataMap, Map<String,String> options): Updates (adds or changes) metadata for tables.
AlterTableMonitorResponse | alterTableMonitor(AlterTableMonitorRequest request): Alters a table monitor previously created with createTableMonitor.
AlterTableMonitorResponse | alterTableMonitor(String topicId, Map<String,String> monitorUpdatesMap, Map<String,String> options): Alters a table monitor previously created with createTableMonitor.
AlterTierResponse | alterTier(AlterTierRequest request): Alters the properties of an existing tier.
AlterTierResponse | alterTier(String name, Map<String,String> options): Alters the properties of an existing tier.
AlterUserResponse | alterUser(AlterUserRequest request): Alters a user.
AlterUserResponse | alterUser(String name, String action, String value, Map<String,String> options): Alters a user.
AlterVideoResponse | alterVideo(AlterVideoRequest request): Alters a video.
AlterVideoResponse | alterVideo(String path, Map<String,String> options): Alters a video.
AlterWalResponse | alterWal(AlterWalRequest request): Alters table WAL settings.
AlterWalResponse | alterWal(List<String> tableNames, Map<String,String> options): Alters table WAL settings.
AppendRecordsResponse | appendRecords(AppendRecordsRequest request): Appends (or inserts) all records from a source table (specified by sourceTableName) to a particular target table (specified by tableName).
AppendRecordsResponse | appendRecords(String tableName, String sourceTableName, Map<String,String> fieldMap, Map<String,String> options): Appends (or inserts) all records from a source table (specified by sourceTableName) to a particular target table (specified by tableName).
ClearStatisticsResponse | clearStatistics(ClearStatisticsRequest request): Clears statistics (cardinality, mean value, etc.) for a column in a specified table.
ClearStatisticsResponse | clearStatistics(String tableName, String columnName, Map<String,String> options): Clears statistics (cardinality, mean value, etc.) for a column in a specified table.
ClearTableResponse | clearTable(ClearTableRequest request): Clears (drops) one or all tables in the database cluster.
ClearTableResponse | clearTable(String tableName, String authorization, Map<String,String> options): Clears (drops) one or all tables in the database cluster.
ClearTableMonitorResponse | clearTableMonitor(ClearTableMonitorRequest request): Deactivates a table monitor previously created with createTableMonitor.
ClearTableMonitorResponse | clearTableMonitor(String topicId, Map<String,String> options): Deactivates a table monitor previously created with createTableMonitor.
ClearTriggerResponse | clearTrigger(ClearTriggerRequest request): Clears or cancels the trigger identified by the specified handle.
ClearTriggerResponse | clearTrigger(String triggerId, Map<String,String> options): Clears or cancels the trigger identified by the specified handle.
CollectStatisticsResponse | collectStatistics(CollectStatisticsRequest request): Collects statistics for one or more columns in a specified table.
CollectStatisticsResponse | collectStatistics(String tableName, List<String> columnNames, Map<String,String> options): Collects statistics for one or more columns in a specified table.
CreateContainerRegistryResponse | createContainerRegistry(CreateContainerRegistryRequest request)
CreateContainerRegistryResponse | createContainerRegistry(String registryName, String uri, String credential, Map<String,String> options)
CreateCredentialResponse | createCredential(CreateCredentialRequest request): Creates a new credential.
CreateCredentialResponse | createCredential(String credentialName, String type, String identity, String secret, Map<String,String> options): Creates a new credential.
CreateDatasinkResponse | createDatasink(CreateDatasinkRequest request): Creates a data sink.
CreateDatasinkResponse | createDatasink(String name, String destination, Map<String,String> options): Creates a data sink.
CreateDatasourceResponse | createDatasource(CreateDatasourceRequest request): Creates a data source.
CreateDatasourceResponse | createDatasource(String name, String location, String userName, String password, Map<String,String> options): Creates a data source.
CreateDeltaTableResponse | createDeltaTable(String deltaTableName, String tableName, Map<String,String> options)
CreateDirectoryResponse | createDirectory(CreateDirectoryRequest request): Creates a new directory in KiFS.
CreateDirectoryResponse | createDirectory(String directoryName, Map<String,String> options): Creates a new directory in KiFS.
CreateEnvironmentResponse | createEnvironment(CreateEnvironmentRequest request): Creates a new environment which can be used by user-defined functions (UDFs).
CreateEnvironmentResponse | createEnvironment(String environmentName, Map<String,String> options): Creates a new environment which can be used by user-defined functions (UDFs).
CreateGraphResponse | createGraph(CreateGraphRequest request): Creates a new graph network using given nodes, edges, weights, and restrictions.
CreateGraphResponse | createGraph(String graphName, boolean directedGraph, List<String> nodes, List<String> edges, List<String> weights, List<String> restrictions, Map<String,String> options): Creates a new graph network using given nodes, edges, weights, and restrictions.
CreateJobResponse | createJob(CreateJobRequest request): Creates a job which will run asynchronously.
CreateJobResponse | createJob(String endpoint, String requestEncoding, ByteBuffer data, String dataStr, Map<String,String> options): Creates a job which will run asynchronously.
CreateJoinTableResponse | createJoinTable(CreateJoinTableRequest request): Creates a table that is the result of a SQL JOIN.
CreateJoinTableResponse | createJoinTable(String joinTableName, List<String> tableNames, List<String> columnNames, List<String> expressions, Map<String,String> options): Creates a table that is the result of a SQL JOIN.
CreateMaterializedViewResponse | createMaterializedView(CreateMaterializedViewRequest request): Initiates the process of creating a materialized view, reserving the view's name to prevent other views or tables from being created with that name.
CreateMaterializedViewResponse | createMaterializedView(String tableName, Map<String,String> options): Initiates the process of creating a materialized view, reserving the view's name to prevent other views or tables from being created with that name.
CreateProcResponse | createProc(CreateProcRequest request): Creates an instance (proc) of a user-defined function (UDF).
CreateProcResponse | createProc(String procName, String executionMode, Map<String,ByteBuffer> files, String command, List<String> args, Map<String,String> options): Creates an instance (proc) of a user-defined function (UDF).
CreateProjectionResponse | createProjection(CreateProjectionRequest request): Creates a new projection of an existing table.
CreateProjectionResponse | createProjection(String tableName, String projectionName, List<String> columnNames, Map<String,String> options): Creates a new projection of an existing table.
CreateResourceGroupResponse | createResourceGroup(CreateResourceGroupRequest request): Creates a new resource group to facilitate resource management.
CreateResourceGroupResponse | createResourceGroup(String name, Map<String,Map<String,String>> tierAttributes, String ranking, String adjoiningResourceGroup, Map<String,String> options): Creates a new resource group to facilitate resource management.
CreateRoleResponse | createRole(CreateRoleRequest request): Creates a new role.
CreateRoleResponse | createRole(String name, Map<String,String> options): Creates a new role.
CreateSchemaResponse | createSchema(CreateSchemaRequest request): Creates a SQL-style schema.
CreateSchemaResponse | createSchema(String schemaName, Map<String,String> options): Creates a SQL-style schema.
CreateStateTableResponse | createStateTable(String tableName, String inputTableName, String initTableName, Map<String,String> options)
CreateTableResponse | createTable(CreateTableRequest request): Creates a new table.
CreateTableResponse | createTable(String tableName, String typeId, Map<String,String> options): Creates a new table.
CreateTableExternalResponse | createTableExternal(CreateTableExternalRequest request): Creates a new external table.
CreateTableExternalResponse | createTableExternal(String tableName, List<String> filepaths, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options): Creates a new external table.
CreateTableMonitorResponse | createTableMonitor(CreateTableMonitorRequest request): Creates a monitor that watches for a single table modification event type (insert, update, or delete) on a particular table (identified by tableName) and forwards event notifications to subscribers via ZMQ.
CreateTableMonitorResponse | createTableMonitor(String tableName, Map<String,String> options): Creates a monitor that watches for a single table modification event type (insert, update, or delete) on a particular table (identified by tableName) and forwards event notifications to subscribers via ZMQ.
CreateTriggerByAreaResponse | createTriggerByArea(CreateTriggerByAreaRequest request): Sets up an area trigger mechanism for two column names for one or more tables.
CreateTriggerByAreaResponse | createTriggerByArea(String requestId, List<String> tableNames, String xColumnName, List<Double> xVector, String yColumnName, List<Double> yVector, Map<String,String> options): Sets up an area trigger mechanism for two column names for one or more tables.
CreateTriggerByRangeResponse | createTriggerByRange(CreateTriggerByRangeRequest request): Sets up a simple range trigger for a column name for one or more tables.
CreateTriggerByRangeResponse | createTriggerByRange(String requestId, List<String> tableNames, String columnName, double min, double max, Map<String,String> options): Sets up a simple range trigger for a column name for one or more tables.
CreateTypeResponse | createType(CreateTypeRequest request): Creates a new type describing the layout of a table.
CreateTypeResponse | createType(String typeDefinition, String label, Map<String,List<String>> properties, Map<String,String> options): Creates a new type describing the layout of a table.
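As a usage sketch (not part of this class's Javadoc), a type can be registered with createType and then used to create a table; the "point" record schema and table names below are hypothetical:

```java
import java.util.HashMap;
import java.util.List;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.CreateTableResponse;
import com.gpudb.protocol.CreateTypeResponse;

public class CreateTableExample {
    // Registers a hypothetical two-double-column record type, then creates
    // a table laid out according to that type.
    static void run(GPUdb db) throws GPUdbException {
        String typeDefinition = "{\"type\":\"record\",\"name\":\"point\","
                + "\"fields\":[{\"name\":\"x\",\"type\":\"double\"},"
                + "{\"name\":\"y\",\"type\":\"double\"}]}";
        CreateTypeResponse typeResponse = db.createType(
                typeDefinition, "point_type",
                new HashMap<String,List<String>>(),   // no column properties
                new HashMap<String,String>());        // no options
        CreateTableResponse tableResponse = db.createTable(
                "points", typeResponse.getTypeId(),
                new HashMap<String,String>());
    }
}
```

The type ID returned by createType can be reused for any number of tables sharing that layout.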
CreateUnionResponse | createUnion(CreateUnionRequest request): Merges data from one or more tables with comparable data types into a new table.
CreateUnionResponse | createUnion(String tableName, List<String> tableNames, List<List<String>> inputColumnNames, List<String> outputColumnNames, Map<String,String> options): Merges data from one or more tables with comparable data types into a new table.
CreateUserExternalResponse | createUserExternal(CreateUserExternalRequest request): Creates a new external user (a user whose credentials are managed by an external LDAP).
CreateUserExternalResponse | createUserExternal(String name, Map<String,String> options): Creates a new external user (a user whose credentials are managed by an external LDAP).
CreateUserInternalResponse | createUserInternal(CreateUserInternalRequest request): Creates a new internal user (a user whose credentials are managed by the database system).
CreateUserInternalResponse | createUserInternal(String name, String password, Map<String,String> options): Creates a new internal user (a user whose credentials are managed by the database system).
CreateVideoResponse | createVideo(CreateVideoRequest request): Creates a job to generate a sequence of raster images that visualize data over a specified time.
CreateVideoResponse | createVideo(String attribute, String begin, double durationSeconds, String end, double framesPerSecond, String style, String path, String styleParameters, Map<String,String> options): Creates a job to generate a sequence of raster images that visualize data over a specified time.
DeleteDirectoryResponse |
deleteDirectory(DeleteDirectoryRequest request)
Deletes a directory from
DeleteDirectoryResponse deleteDirectory(String directoryName,
Map<String,String> options)
Deletes a directory from
DeleteFilesResponse deleteFiles(DeleteFilesRequest request)
Deletes one or more files from
DeleteFilesResponse deleteFiles(List<String> fileNames,
Map<String,String> options)
Deletes one or more files from
DeleteGraphResponse deleteGraph(DeleteGraphRequest request)
Deletes an existing graph from the graph server and/or persist.
| ||||||||||||||||||||||||
DeleteGraphResponse |
deleteGraph(String graphName,
Map<String,String> options)
Deletes an existing graph from the graph server and/or persist.
|
||||||||||||||||||||||||
DeleteProcResponse |
deleteProc(DeleteProcRequest request)
Deletes a proc.
|
||||||||||||||||||||||||
DeleteProcResponse |
deleteProc(String procName,
Map<String,String> options)
Deletes a proc.
|
||||||||||||||||||||||||
DeleteRecordsResponse |
deleteRecords(DeleteRecordsRequest request)
Deletes record(s) matching the provided criteria from the given table.
|
||||||||||||||||||||||||
DeleteRecordsResponse |
deleteRecords(String tableName,
List<String> expressions,
Map<String,String> options)
Deletes record(s) matching the provided criteria from the given table.
|
||||||||||||||||||||||||
DeleteResourceGroupResponse |
deleteResourceGroup(DeleteResourceGroupRequest request)
Deletes a resource group.
|
||||||||||||||||||||||||
DeleteResourceGroupResponse |
deleteResourceGroup(String name,
Map<String,String> options)
Deletes a resource group.
|
||||||||||||||||||||||||
DeleteRoleResponse |
deleteRole(DeleteRoleRequest request)
Deletes an existing role.
|
||||||||||||||||||||||||
DeleteRoleResponse |
deleteRole(String name,
Map<String,String> options)
Deletes an existing role.
|
||||||||||||||||||||||||
DeleteUserResponse |
deleteUser(DeleteUserRequest request)
Deletes an existing user.
|
||||||||||||||||||||||||
DeleteUserResponse |
deleteUser(String name,
Map<String,String> options)
Deletes an existing user.
|
||||||||||||||||||||||||
DownloadFilesResponse |
downloadFiles(DownloadFilesRequest request)
Downloads one or more files from KiFS.
|
||||||||||||||||||||||||
DownloadFilesResponse |
downloadFiles(List<String> fileNames,
List<Long> readOffsets,
List<Long> readLengths,
Map<String,String> options)
Downloads one or more files from KiFS.
|
||||||||||||||||||||||||
DropContainerRegistryResponse |
dropContainerRegistry(DropContainerRegistryRequest request) |
||||||||||||||||||||||||
DropContainerRegistryResponse |
dropContainerRegistry(String registryName,
Map<String,String> options) |
||||||||||||||||||||||||
DropCredentialResponse |
dropCredential(DropCredentialRequest request)
Drops an existing credential.
|
||||||||||||||||||||||||
DropCredentialResponse |
dropCredential(String credentialName,
Map<String,String> options)
Drops an existing credential.
|
||||||||||||||||||||||||
DropDatasinkResponse |
dropDatasink(DropDatasinkRequest request)
Drops an existing data sink.
|
||||||||||||||||||||||||
DropDatasinkResponse |
dropDatasink(String name,
Map<String,String> options)
Drops an existing data sink.
|
||||||||||||||||||||||||
DropDatasourceResponse |
dropDatasource(DropDatasourceRequest request)
Drops an existing data source.
|
||||||||||||||||||||||||
DropDatasourceResponse |
dropDatasource(String name,
Map<String,String> options)
Drops an existing data source.
|
||||||||||||||||||||||||
DropEnvironmentResponse |
dropEnvironment(DropEnvironmentRequest request)
Drops an existing user-defined function (UDF) environment.
|
||||||||||||||||||||||||
DropEnvironmentResponse |
dropEnvironment(String environmentName,
Map<String,String> options)
Drops an existing user-defined function (UDF) environment.
|
||||||||||||||||||||||||
DropModelResponse |
dropModel(String modelName,
Map<String,String> options) |
||||||||||||||||||||||||
DropSchemaResponse |
dropSchema(DropSchemaRequest request)
Drops an existing SQL-style schema.
|
||||||||||||||||||||||||
DropSchemaResponse |
dropSchema(String schemaName,
Map<String,String> options)
Drops an existing SQL-style schema.
|
||||||||||||||||||||||||
EvaluateModelResponse |
evaluateModel(String modelName,
int replicas,
String deploymentMode,
String sourceTable,
String destinationTable,
Map<String,String> options) |
||||||||||||||||||||||||
ExecuteProcResponse |
executeProc(ExecuteProcRequest request)
Executes a proc.
|
||||||||||||||||||||||||
ExecuteProcResponse |
executeProc(String procName,
Map<String,String> params,
Map<String,ByteBuffer> binParams,
List<String> inputTableNames,
Map<String,List<String>> inputColumnNames,
List<String> outputTableNames,
Map<String,String> options)
Executes a proc.
|
||||||||||||||||||||||||
ExecuteSqlResponse |
executeSql(ExecuteSqlRequest request)
Executes a SQL statement (query, DML, or DDL).
|
||||||||||||||||||||||||
ExecuteSqlResponse |
executeSql(String statement,
long offset,
long limit,
String requestSchemaStr,
List<ByteBuffer> data,
Map<String,String> options)
Executes a SQL statement (query, DML, or DDL).
|
||||||||||||||||||||||||
RawExecuteSqlResponse |
executeSqlRaw(ExecuteSqlRequest request)
Executes a SQL statement (query, DML, or DDL).
|
||||||||||||||||||||||||
ExportQueryMetricsResponse |
exportQueryMetrics(ExportQueryMetricsRequest request)
Exports query metrics to a given destination.
|
||||||||||||||||||||||||
ExportQueryMetricsResponse |
exportQueryMetrics(Map<String,String> options)
Exports query metrics to a given destination.
|
||||||||||||||||||||||||
ExportRecordsToFilesResponse |
exportRecordsToFiles(ExportRecordsToFilesRequest request)
Exports records from a table to files.
|
||||||||||||||||||||||||
ExportRecordsToFilesResponse |
exportRecordsToFiles(String tableName,
String filepath,
Map<String,String> options)
Exports records from a table to files.
|
||||||||||||||||||||||||
ExportRecordsToTableResponse |
exportRecordsToTable(ExportRecordsToTableRequest request)
Exports records from a source table to the specified target table in an
external database.
|
||||||||||||||||||||||||
ExportRecordsToTableResponse |
exportRecordsToTable(String tableName,
String remoteQuery,
Map<String,String> options)
Exports records from a source table to the specified target table in an
external database.
|
||||||||||||||||||||||||
FilterResponse |
filter(FilterRequest request)
Filters data based on the specified expression.
|
||||||||||||||||||||||||
FilterResponse |
filter(String tableName,
String viewName,
String expression,
Map<String,String> options)
Filters data based on the specified expression.
|
||||||||||||||||||||||||
FilterByAreaResponse |
filterByArea(FilterByAreaRequest request)
Calculates which objects from a table are within a named area of
interest (NAI/polygon).
|
||||||||||||||||||||||||
FilterByAreaResponse |
filterByArea(String tableName,
String viewName,
String xColumnName,
List<Double> xVector,
String yColumnName,
List<Double> yVector,
Map<String,String> options)
Calculates which objects from a table are within a named area of
interest (NAI/polygon).
|
||||||||||||||||||||||||
FilterByAreaGeometryResponse |
filterByAreaGeometry(FilterByAreaGeometryRequest request)
Calculates which geospatial geometry objects from a table intersect a
named area of interest (NAI/polygon).
|
||||||||||||||||||||||||
FilterByAreaGeometryResponse |
filterByAreaGeometry(String tableName,
String viewName,
String columnName,
List<Double> xVector,
List<Double> yVector,
Map<String,String> options)
Calculates which geospatial geometry objects from a table intersect a
named area of interest (NAI/polygon).
|
||||||||||||||||||||||||
FilterByBoxResponse |
filterByBox(FilterByBoxRequest request)
Calculates how many objects within the given table lie in a rectangular
box.
|
||||||||||||||||||||||||
FilterByBoxResponse |
filterByBox(String tableName,
String viewName,
String xColumnName,
double minX,
double maxX,
String yColumnName,
double minY,
double maxY,
Map<String,String> options)
Calculates how many objects within the given table lie in a rectangular
box.
|
||||||||||||||||||||||||
FilterByBoxGeometryResponse |
filterByBoxGeometry(FilterByBoxGeometryRequest request)
Calculates which geospatial geometry objects from a table intersect a
rectangular box.
|
||||||||||||||||||||||||
FilterByBoxGeometryResponse |
filterByBoxGeometry(String tableName,
String viewName,
String columnName,
double minX,
double maxX,
double minY,
double maxY,
Map<String,String> options)
Calculates which geospatial geometry objects from a table intersect a
rectangular box.
|
||||||||||||||||||||||||
FilterByGeometryResponse |
filterByGeometry(FilterByGeometryRequest request)
Applies a geometry filter against a geospatial geometry column in a
given table or view.
|
||||||||||||||||||||||||
FilterByGeometryResponse |
filterByGeometry(String tableName,
String viewName,
String columnName,
String inputWkt,
String operation,
Map<String,String> options)
Applies a geometry filter against a geospatial geometry column in a
given table or view.
|
||||||||||||||||||||||||
FilterByListResponse |
filterByList(FilterByListRequest request)
Calculates which records from a table have values in the given list for
the corresponding column.
|
||||||||||||||||||||||||
FilterByListResponse |
filterByList(String tableName,
String viewName,
Map<String,List<String>> columnValuesMap,
Map<String,String> options)
Calculates which records from a table have values in the given list for
the corresponding column.
|
||||||||||||||||||||||||
FilterByRadiusResponse |
filterByRadius(FilterByRadiusRequest request)
Calculates which objects from a table lie within a circle with the given
radius and center point (i.e., a circular NAI).
|
||||||||||||||||||||||||
FilterByRadiusResponse |
filterByRadius(String tableName,
String viewName,
String xColumnName,
double xCenter,
String yColumnName,
double yCenter,
double radius,
Map<String,String> options)
Calculates which objects from a table lie within a circle with the given
radius and center point (i.e., a circular NAI).
|
||||||||||||||||||||||||
FilterByRadiusGeometryResponse |
filterByRadiusGeometry(FilterByRadiusGeometryRequest request)
Calculates which geospatial geometry objects from a table intersect a
circle with the given radius and center point (i.e., a circular NAI).
|
||||||||||||||||||||||||
FilterByRadiusGeometryResponse |
filterByRadiusGeometry(String tableName,
String viewName,
String columnName,
double xCenter,
double yCenter,
double radius,
Map<String,String> options)
Calculates which geospatial geometry objects from a table intersect a
circle with the given radius and center point (i.e., a circular NAI).
|
||||||||||||||||||||||||
FilterByRangeResponse |
filterByRange(FilterByRangeRequest request)
Calculates which objects from a table have a column that is within the
given bounds.
|
||||||||||||||||||||||||
FilterByRangeResponse |
filterByRange(String tableName,
String viewName,
String columnName,
double lowerBound,
double upperBound,
Map<String,String> options)
Calculates which objects from a table have a column that is within the
given bounds.
|
||||||||||||||||||||||||
FilterBySeriesResponse |
filterBySeries(FilterBySeriesRequest request)
Filters objects matching all points of the given track (works only on
track type data).
|
||||||||||||||||||||||||
FilterBySeriesResponse |
filterBySeries(String tableName,
String viewName,
String trackId,
List<String> targetTrackIds,
Map<String,String> options)
Filters objects matching all points of the given track (works only on
track type data).
|
||||||||||||||||||||||||
FilterByStringResponse |
filterByString(FilterByStringRequest request)
Calculates which objects from a table or view match a string expression
for the given string columns.
|
||||||||||||||||||||||||
FilterByStringResponse |
filterByString(String tableName,
String viewName,
String expression,
String mode,
List<String> columnNames,
Map<String,String> options)
Calculates which objects from a table or view match a string expression
for the given string columns.
|
||||||||||||||||||||||||
FilterByTableResponse |
filterByTable(FilterByTableRequest request)
Filters objects in one table based on objects in another table.
|
||||||||||||||||||||||||
FilterByTableResponse |
filterByTable(String tableName,
String viewName,
String columnName,
String sourceTableName,
String sourceTableColumnName,
Map<String,String> options)
Filters objects in one table based on objects in another table.
|
||||||||||||||||||||||||
FilterByValueResponse |
filterByValue(FilterByValueRequest request)
Calculates which objects from a table have a particular value for a
particular column.
|
||||||||||||||||||||||||
FilterByValueResponse |
filterByValue(String tableName,
String viewName,
boolean isString,
double value,
String valueStr,
String columnName,
Map<String,String> options)
Calculates which objects from a table have a particular value for a
particular column.
|
||||||||||||||||||||||||
GetJobResponse |
getJob(GetJobRequest request)
Gets the status and result of an asynchronously running job.
|
||||||||||||||||||||||||
GetJobResponse |
getJob(long jobId,
Map<String,String> options)
Gets the status and result of an asynchronously running job.
|
||||||||||||||||||||||||
<TResponse> GetRecordsResponse<TResponse> |
getRecords(GetRecordsRequest request)
Retrieves records from a given table, optionally filtered by an
expression and/or sorted by a column.
|
||||||||||||||||||||||||
<TResponse> GetRecordsResponse<TResponse> |
getRecords(Object typeDescriptor,
GetRecordsRequest request)
Retrieves records from a given table, optionally filtered by an
expression and/or sorted by a column.
|
||||||||||||||||||||||||
<TResponse> GetRecordsResponse<TResponse> |
getRecords(Object typeDescriptor,
String tableName,
long offset,
long limit,
Map<String,String> options)
Retrieves records from a given table, optionally filtered by an
expression and/or sorted by a column.
|
||||||||||||||||||||||||
<TResponse> GetRecordsResponse<TResponse> |
getRecords(String tableName,
long offset,
long limit,
Map<String,String> options)
Retrieves records from a given table, optionally filtered by an
expression and/or sorted by a column.
|
||||||||||||||||||||||||
GetRecordsByColumnResponse |
getRecordsByColumn(GetRecordsByColumnRequest request)
For a given table, retrieves the values from the requested column(s).
|
||||||||||||||||||||||||
GetRecordsByColumnResponse |
getRecordsByColumn(String tableName,
List<String> columnNames,
long offset,
long limit,
Map<String,String> options)
For a given table, retrieves the values from the requested column(s).
|
||||||||||||||||||||||||
RawGetRecordsByColumnResponse |
getRecordsByColumnRaw(GetRecordsByColumnRequest request)
For a given table, retrieves the values from the requested column(s).
|
||||||||||||||||||||||||
<TResponse> GetRecordsBySeriesResponse<TResponse> |
getRecordsBySeries(GetRecordsBySeriesRequest request)
Retrieves the complete series/track records from the given
worldTableName based on the partial track information contained in the
tableName. |
||||||||||||||||||||||||
<TResponse> GetRecordsBySeriesResponse<TResponse> |
getRecordsBySeries(Object typeDescriptor,
GetRecordsBySeriesRequest request)
Retrieves the complete series/track records from the given
worldTableName based on the partial track information contained in the
tableName. |
||||||||||||||||||||||||
<TResponse> GetRecordsBySeriesResponse<TResponse> |
getRecordsBySeries(Object typeDescriptor,
String tableName,
String worldTableName,
int offset,
int limit,
Map<String,String> options)
Retrieves the complete series/track records from the given
worldTableName based on the partial track information contained in the
tableName. |
||||||||||||||||||||||||
<TResponse> GetRecordsBySeriesResponse<TResponse> |
getRecordsBySeries(String tableName,
String worldTableName,
int offset,
int limit,
Map<String,String> options)
Retrieves the complete series/track records from the given
worldTableName based on the partial track information contained in the
tableName. |
||||||||||||||||||||||||
RawGetRecordsBySeriesResponse |
getRecordsBySeriesRaw(GetRecordsBySeriesRequest request)
Retrieves the complete series/track records from the given
worldTableName based on the partial track information contained in the
tableName. |
||||||||||||||||||||||||
<TResponse> GetRecordsFromCollectionResponse<TResponse> |
getRecordsFromCollection(GetRecordsFromCollectionRequest request)
Retrieves records from a collection.
|
||||||||||||||||||||||||
<TResponse> GetRecordsFromCollectionResponse<TResponse> |
getRecordsFromCollection(Object typeDescriptor,
GetRecordsFromCollectionRequest request)
Retrieves records from a collection.
|
||||||||||||||||||||||||
<TResponse> GetRecordsFromCollectionResponse<TResponse> |
getRecordsFromCollection(Object typeDescriptor,
String tableName,
long offset,
long limit,
Map<String,String> options)
Retrieves records from a collection.
|
||||||||||||||||||||||||
<TResponse> GetRecordsFromCollectionResponse<TResponse> |
getRecordsFromCollection(String tableName,
long offset,
long limit,
Map<String,String> options)
Retrieves records from a collection.
|
||||||||||||||||||||||||
RawGetRecordsFromCollectionResponse |
getRecordsFromCollectionRaw(GetRecordsFromCollectionRequest request)
Retrieves records from a collection.
|
||||||||||||||||||||||||
RawGetRecordsResponse |
getRecordsRaw(GetRecordsRequest request)
Retrieves records from a given table, optionally filtered by an
expression and/or sorted by a column.
|
||||||||||||||||||||||||
GetVectortileResponse |
getVectortile(GetVectortileRequest request) |
||||||||||||||||||||||||
GetVectortileResponse |
getVectortile(List<String> tableNames,
List<String> columnNames,
Map<String,List<String>> layers,
int tileX,
int tileY,
int zoom,
Map<String,String> options) |
||||||||||||||||||||||||
GrantPermissionResponse |
grantPermission(GrantPermissionRequest request)
Grants a user or role the specified permission on the specified object.
|
||||||||||||||||||||||||
GrantPermissionResponse |
grantPermission(String principal,
String object,
String objectType,
String permission,
Map<String,String> options)
Grants a user or role the specified permission on the specified object.
|
||||||||||||||||||||||||
GrantPermissionCredentialResponse |
grantPermissionCredential(GrantPermissionCredentialRequest request)
Grants a credential-level permission to a user or role.
|
||||||||||||||||||||||||
GrantPermissionCredentialResponse |
grantPermissionCredential(String name,
String permission,
String credentialName,
Map<String,String> options)
Grants a credential-level permission to a user or role.
|
||||||||||||||||||||||||
GrantPermissionDatasourceResponse |
grantPermissionDatasource(GrantPermissionDatasourceRequest request)
Grants a data source permission to a user or role.
|
||||||||||||||||||||||||
GrantPermissionDatasourceResponse |
grantPermissionDatasource(String name,
String permission,
String datasourceName,
Map<String,String> options)
Grants a data source permission to a user or role.
|
||||||||||||||||||||||||
GrantPermissionDirectoryResponse |
grantPermissionDirectory(GrantPermissionDirectoryRequest request)
Grants a KiFS directory-level permission to a user or role.
|
||||||||||||||||||||||||
GrantPermissionDirectoryResponse |
grantPermissionDirectory(String name,
String permission,
String directoryName,
Map<String,String> options)
Grants a KiFS directory-level permission to a user or role.
|
||||||||||||||||||||||||
GrantPermissionProcResponse |
grantPermissionProc(GrantPermissionProcRequest request)
Grants a proc-level permission to a user or role.
|
||||||||||||||||||||||||
GrantPermissionProcResponse |
grantPermissionProc(String name,
String permission,
String procName,
Map<String,String> options)
Grants a proc-level permission to a user or role.
|
||||||||||||||||||||||||
GrantPermissionSystemResponse |
grantPermissionSystem(GrantPermissionSystemRequest request)
Grants a system-level permission to a user or role.
|
||||||||||||||||||||||||
GrantPermissionSystemResponse |
grantPermissionSystem(String name,
String permission,
Map<String,String> options)
Grants a system-level permission to a user or role.
|
||||||||||||||||||||||||
GrantPermissionTableResponse |
grantPermissionTable(GrantPermissionTableRequest request)
Grants a table-level permission to a user or role.
|
||||||||||||||||||||||||
GrantPermissionTableResponse |
grantPermissionTable(String name,
String permission,
String tableName,
String filterExpression,
Map<String,String> options)
Grants a table-level permission to a user or role.
|
||||||||||||||||||||||||
GrantRoleResponse |
grantRole(GrantRoleRequest request)
Grants membership in a role to a user or role.
|
||||||||||||||||||||||||
GrantRoleResponse |
grantRole(String role,
String member,
Map<String,String> options)
Grants membership in a role to a user or role.
|
||||||||||||||||||||||||
HasPermissionResponse |
hasPermission(HasPermissionRequest request)
Checks if the specified user has the specified permission on the
specified object.
|
||||||||||||||||||||||||
HasPermissionResponse |
hasPermission(String principal,
String object,
String objectType,
String permission,
Map<String,String> options)
Checks if the specified user has the specified permission on the
specified object.
|
||||||||||||||||||||||||
HasProcResponse |
hasProc(HasProcRequest request)
Checks the existence of a proc with the given name.
|
||||||||||||||||||||||||
HasProcResponse |
hasProc(String procName,
Map<String,String> options)
Checks the existence of a proc with the given name.
|
||||||||||||||||||||||||
HasRoleResponse |
hasRole(HasRoleRequest request)
Checks if the specified user has the specified role.
|
||||||||||||||||||||||||
HasRoleResponse |
hasRole(String principal,
String role,
Map<String,String> options)
Checks if the specified user has the specified role.
|
||||||||||||||||||||||||
HasSchemaResponse |
hasSchema(HasSchemaRequest request)
Checks for the existence of a schema with the given name.
|
||||||||||||||||||||||||
HasSchemaResponse |
hasSchema(String schemaName,
Map<String,String> options)
Checks for the existence of a schema with the given name.
|
||||||||||||||||||||||||
HasTableResponse |
hasTable(HasTableRequest request)
Checks for the existence of a table with the given name.
|
||||||||||||||||||||||||
HasTableResponse |
hasTable(String tableName,
Map<String,String> options)
Checks for the existence of a table with the given name.
|
||||||||||||||||||||||||
HasTypeResponse |
hasType(HasTypeRequest request)
Check for the existence of a type.
|
||||||||||||||||||||||||
HasTypeResponse |
hasType(String typeId,
Map<String,String> options)
Check for the existence of a type.
|
||||||||||||||||||||||||
ImportModelResponse |
importModel(ImportModelRequest request) |
||||||||||||||||||||||||
ImportModelResponse |
importModel(String modelName,
String registryName,
String container,
String runFunction,
String modelType,
Map<String,String> options) |
||||||||||||||||||||||||
<TRequest> InsertRecordsResponse |
insertRecords(InsertRecordsRequest<TRequest> request)
Adds multiple records to the specified table.
|
||||||||||||||||||||||||
<TRequest> InsertRecordsResponse |
insertRecords(String tableName,
List<TRequest> data,
Map<String,String> options)
Adds multiple records to the specified table.
|
||||||||||||||||||||||||
<TRequest> InsertRecordsResponse |
insertRecords(TypeObjectMap<TRequest> typeObjectMap,
InsertRecordsRequest<TRequest> request)
Adds multiple records to the specified table.
|
||||||||||||||||||||||||
<TRequest> InsertRecordsResponse |
insertRecords(TypeObjectMap<TRequest> typeObjectMap,
String tableName,
List<TRequest> data,
Map<String,String> options)
Adds multiple records to the specified table.
|
||||||||||||||||||||||||
InsertRecordsFromFilesResponse |
insertRecordsFromFiles(InsertRecordsFromFilesRequest request)
Reads from one or more files and inserts the data into a new or existing
table.
|
||||||||||||||||||||||||
InsertRecordsFromFilesResponse |
insertRecordsFromFiles(String tableName,
List<String> filepaths,
Map<String,Map<String,String>> modifyColumns,
Map<String,String> createTableOptions,
Map<String,String> options)
Reads from one or more files and inserts the data into a new or existing
table.
|
||||||||||||||||||||||||
InsertRecordsFromPayloadResponse |
insertRecordsFromPayload(InsertRecordsFromPayloadRequest request)
Reads from the given text-based or binary payload and inserts the data
into a new or existing table.
|
||||||||||||||||||||||||
InsertRecordsFromPayloadResponse |
insertRecordsFromPayload(String tableName,
String dataText,
ByteBuffer dataBytes,
Map<String,Map<String,String>> modifyColumns,
Map<String,String> createTableOptions,
Map<String,String> options)
Reads from the given text-based or binary payload and inserts the data
into a new or existing table.
|
||||||||||||||||||||||||
InsertRecordsFromQueryResponse |
insertRecordsFromQuery(InsertRecordsFromQueryRequest request)
Computes a remote query result and inserts the result data into a new or
existing table.
|
||||||||||||||||||||||||
InsertRecordsFromQueryResponse |
insertRecordsFromQuery(String tableName,
String remoteQuery,
Map<String,Map<String,String>> modifyColumns,
Map<String,String> createTableOptions,
Map<String,String> options)
Computes a remote query result and inserts the result data into a new or
existing table.
|
||||||||||||||||||||||||
InsertRecordsRandomResponse |
insertRecordsRandom(InsertRecordsRandomRequest request)
Generates a specified number of random records and adds them to the
given table.
|
||||||||||||||||||||||||
InsertRecordsRandomResponse |
insertRecordsRandom(String tableName,
long count,
Map<String,Map<String,Double>> options)
Generates a specified number of random records and adds them to the
given table.
|
||||||||||||||||||||||||
InsertRecordsResponse |
insertRecordsRaw(RawInsertRecordsRequest request)
Adds multiple records to the specified table.
|
||||||||||||||||||||||||
InsertSymbolResponse |
insertSymbol(InsertSymbolRequest request)
Adds a symbol or icon (i.e., an image) to represent data points when data is rendered visually.
|
||||||||||||||||||||||||
InsertSymbolResponse |
insertSymbol(String symbolId,
String symbolFormat,
ByteBuffer symbolData,
Map<String,String> options)
Adds a symbol or icon (i.e., an image) to represent data points when data is rendered visually.
|
||||||||||||||||||||||||
KillProcResponse |
killProc(KillProcRequest request)
Kills a running proc instance.
|
||||||||||||||||||||||||
KillProcResponse |
killProc(String runId,
Map<String,String> options)
Kills a running proc instance.
|
||||||||||||||||||||||||
ListGraphResponse |
listGraph(ListGraphRequest request) |
||||||||||||||||||||||||
ListGraphResponse |
listGraph(String graphName,
Map<String,String> options) |
||||||||||||||||||||||||
LockTableResponse |
lockTable(LockTableRequest request)
Manages global access to a table's data.
|
||||||||||||||||||||||||
LockTableResponse |
lockTable(String tableName,
String lockType,
Map<String,String> options)
Manages global access to a table's data.
|
||||||||||||||||||||||||
MatchGraphResponse |
matchGraph(MatchGraphRequest request)
Matches a directed route implied by a given set of latitude/longitude
points to an existing underlying road network graph using a given
solution type.
|
||||||||||||||||||||||||
MatchGraphResponse |
matchGraph(String graphName,
List<String> samplePoints,
String solveMethod,
String solutionTable,
Map<String,String> options)
Matches a directed route implied by a given set of latitude/longitude
points to an existing underlying road network graph using a given
solution type.
|
||||||||||||||||||||||||
MergeRecordsResponse |
mergeRecords(MergeRecordsRequest request)
Creates a new empty result table (specified by tableName), and inserts
all records from the source tables (specified by sourceTableNames) based
on the field mapping information (specified by fieldMaps). |
||||||||||||||||||||||||
MergeRecordsResponse |
mergeRecords(String tableName,
List<String> sourceTableNames,
List<Map<String,String>> fieldMaps,
Map<String,String> options)
Creates a new empty result table (specified by tableName), and inserts
all records from the source tables (specified by sourceTableNames) based
on the field mapping information (specified by fieldMaps). |
||||||||||||||||||||||||
ModifyGraphResponse |
modifyGraph(ModifyGraphRequest request)
Update an existing graph network using given nodes, edges, weights,
restrictions, and options.
|
||||||||||||||||||||||||
ModifyGraphResponse |
modifyGraph(String graphName,
List<String> nodes,
List<String> edges,
List<String> weights,
List<String> restrictions,
Map<String,String> options)
Update an existing graph network using given nodes, edges, weights,
restrictions, and options.
|
||||||||||||||||||||||||
QueryGraphResponse |
queryGraph(QueryGraphRequest request)
Employs a topological query on a graph generated a-priori by
createGraph and returns a list of
adjacent edge(s) or node(s), also known as an adjacency list, depending
on what's been provided to the endpoint; providing edges will return
nodes and providing nodes will return edges. |
||||||||||||||||||||||||
QueryGraphResponse |
queryGraph(String graphName,
List<String> queries,
List<String> restrictions,
String adjacencyTable,
int rings,
Map<String,String> options)
Employs a topological query on a graph generated a-priori by
createGraph and returns a list of adjacent edge(s) or node(s), also
known as an adjacency list, depending on what's been provided to the
endpoint; providing edges will return nodes and providing nodes will
return edges. |
||||||||||||||||||||||||
RepartitionGraphResponse |
repartitionGraph(RepartitionGraphRequest request)
Rebalances an existing partitioned graph.
|
||||||||||||||||||||||||
RepartitionGraphResponse |
repartitionGraph(String graphName,
Map<String,String> options)
Rebalances an existing partitioned graph.
|
||||||||||||||||||||||||
ReserveResourceResponse |
reserveResource(ReserveResourceRequest request) |
||||||||||||||||||||||||
ReserveResourceResponse |
reserveResource(String component,
String name,
String action,
long bytesRequested,
long ownerId,
Map<String,String> options) |
||||||||||||||||||||||||
RevokePermissionResponse |
revokePermission(RevokePermissionRequest request)
Revokes the specified permission on the specified object from a user or role.
|
||||||||||||||||||||||||
RevokePermissionResponse |
revokePermission(String principal,
String object,
String objectType,
String permission,
Map<String,String> options)
Revokes the specified permission on the specified object from a user or role.
|
||||||||||||||||||||||||
RevokePermissionCredentialResponse |
revokePermissionCredential(RevokePermissionCredentialRequest request)
Revokes a credential-level permission from a user or role.
|
||||||||||||||||||||||||
RevokePermissionCredentialResponse |
revokePermissionCredential(String name,
String permission,
String credentialName,
Map<String,String> options)
Revokes a credential-level permission from a user or role.
|
||||||||||||||||||||||||
RevokePermissionDatasourceResponse |
revokePermissionDatasource(RevokePermissionDatasourceRequest request)
Revokes a data source permission from a user or role.
|
||||||||||||||||||||||||
RevokePermissionDatasourceResponse |
revokePermissionDatasource(String name,
String permission,
String datasourceName,
Map<String,String> options)
Revokes a data source permission from a user or role.
|
||||||||||||||||||||||||
RevokePermissionDirectoryResponse |
revokePermissionDirectory(RevokePermissionDirectoryRequest request)
Revokes a KiFS directory-level permission from a user or role.
|
||||||||||||||||||||||||
RevokePermissionDirectoryResponse |
revokePermissionDirectory(String name,
String permission,
String directoryName,
Map<String,String> options)
Revokes a KiFS directory-level permission from a user or role.
|
||||||||||||||||||||||||
RevokePermissionProcResponse |
revokePermissionProc(RevokePermissionProcRequest request)
Revokes a proc-level permission from a user or role.
|
||||||||||||||||||||||||
RevokePermissionProcResponse |
revokePermissionProc(String name,
String permission,
String procName,
Map<String,String> options)
Revokes a proc-level permission from a user or role.
|
||||||||||||||||||||||||
RevokePermissionSystemResponse |
revokePermissionSystem(RevokePermissionSystemRequest request)
Revokes a system-level permission from a user or role.
|
||||||||||||||||||||||||
RevokePermissionSystemResponse |
revokePermissionSystem(String name,
String permission,
Map<String,String> options)
Revokes a system-level permission from a user or role.
|
||||||||||||||||||||||||
RevokePermissionTableResponse |
revokePermissionTable(RevokePermissionTableRequest request)
Revokes a table-level permission from a user or role.
|
||||||||||||||||||||||||
RevokePermissionTableResponse |
revokePermissionTable(String name,
String permission,
String tableName,
Map<String,String> options)
Revokes a table-level permission from a user or role.
|
||||||||||||||||||||||||
RevokeRoleResponse |
revokeRole(RevokeRoleRequest request)
Revokes membership in a role from a user or role.
|
||||||||||||||||||||||||
RevokeRoleResponse |
revokeRole(String role,
String member,
Map<String,String> options)
Revokes membership in a role from a user or role.
|
||||||||||||||||||||||||
ShowContainerRegistryResponse |
showContainerRegistry(ShowContainerRegistryRequest request) |
||||||||||||||||||||||||
ShowContainerRegistryResponse |
showContainerRegistry(String registryName,
Map<String,String> options) |
||||||||||||||||||||||||
ShowCredentialResponse |
showCredential(ShowCredentialRequest request)
Shows information about a specified credential.
|
||||||||||||||||||||||||
ShowCredentialResponse |
showCredential(String credentialName,
Map<String,String> options)
Shows information about a specified credential.
|
||||||||||||||||||||||||
ShowDatasinkResponse |
showDatasink(ShowDatasinkRequest request)
Shows information about a specified data sink.
|
||||||||||||||||||||||||
ShowDatasinkResponse |
showDatasink(String name,
Map<String,String> options)
Shows information about a specified data sink.
|
||||||||||||||||||||||||
ShowDatasourceResponse |
showDatasource(ShowDatasourceRequest request)
Shows information about a specified data source.
|
||||||||||||||||||||||||
ShowDatasourceResponse |
showDatasource(String name,
Map<String,String> options)
Shows information about a specified data source.
|
||||||||||||||||||||||||
ShowDirectoriesResponse |
showDirectories(ShowDirectoriesRequest request)
Shows information about directories in KiFS.
|
||||||||||||||||||||||||
ShowDirectoriesResponse |
showDirectories(String directoryName,
Map<String,String> options)
Shows information about directories in KiFS.
|
||||||||||||||||||||||||
ShowEnvironmentResponse |
showEnvironment(ShowEnvironmentRequest request)
Shows information about a specified user-defined function (UDF) environment.
|
||||||||||||||||||||||||
ShowEnvironmentResponse |
showEnvironment(String environmentName,
Map<String,String> options)
Shows information about a specified user-defined function (UDF) environment.
|
||||||||||||||||||||||||
ShowFilesResponse |
showFiles(List<String> paths,
Map<String,String> options)
Shows information about files in KiFS.
|
||||||||||||||||||||||||
ShowFilesResponse |
showFiles(ShowFilesRequest request)
Shows information about files in KiFS.
|
||||||||||||||||||||||||
ShowFunctionsResponse |
showFunctions(List<String> names,
Map<String,String> options) |
||||||||||||||||||||||||
ShowFunctionsResponse |
showFunctions(ShowFunctionsRequest request) |
||||||||||||||||||||||||
ShowGraphResponse |
showGraph(ShowGraphRequest request)
Shows information and characteristics of graphs that exist on the graph
server.
|
||||||||||||||||||||||||
ShowGraphResponse |
showGraph(String graphName,
Map<String,String> options)
Shows information and characteristics of graphs that exist on the graph
server.
|
||||||||||||||||||||||||
ShowGraphGrammarResponse |
showGraphGrammar(Map<String,String> options) |
||||||||||||||||||||||||
ShowGraphGrammarResponse |
showGraphGrammar(ShowGraphGrammarRequest request) |
||||||||||||||||||||||||
ShowModelResponse |
showModel(List<String> modelNames,
Map<String,String> options) |
||||||||||||||||||||||||
ShowModelResponse |
showModel(ShowModelRequest request) |
||||||||||||||||||||||||
ShowProcResponse |
showProc(ShowProcRequest request)
Shows information about a proc.
|
||||||||||||||||||||||||
ShowProcResponse |
showProc(String procName,
Map<String,String> options)
Shows information about a proc.
|
||||||||||||||||||||||||
ShowProcStatusResponse |
showProcStatus(ShowProcStatusRequest request)
Shows the statuses of running or completed proc instances.
|
||||||||||||||||||||||||
ShowProcStatusResponse |
showProcStatus(String runId,
Map<String,String> options)
Shows the statuses of running or completed proc instances.
|
||||||||||||||||||||||||
ShowResourceGroupsResponse |
showResourceGroups(List<String> names,
Map<String,String> options)
Requests resource group properties.
|
||||||||||||||||||||||||
ShowResourceGroupsResponse |
showResourceGroups(ShowResourceGroupsRequest request)
Requests resource group properties.
|
||||||||||||||||||||||||
ShowResourceObjectsResponse |
showResourceObjects(Map<String,String> options)
Returns information about the internal sub-components (tiered objects)
which use resources of the system.
|
||||||||||||||||||||||||
ShowResourceObjectsResponse |
showResourceObjects(ShowResourceObjectsRequest request)
Returns information about the internal sub-components (tiered objects)
which use resources of the system.
|
||||||||||||||||||||||||
ShowResourceStatisticsResponse |
showResourceStatistics(Map<String,String> options)
Requests various statistics for storage/memory tiers and resource
groups.
|
||||||||||||||||||||||||
ShowResourceStatisticsResponse |
showResourceStatistics(ShowResourceStatisticsRequest request)
Requests various statistics for storage/memory tiers and resource
groups.
|
||||||||||||||||||||||||
ShowSchemaResponse |
showSchema(ShowSchemaRequest request)
Retrieves information about a schema (a.k.a. collection).
|
||||||||||||||||||||||||
ShowSchemaResponse |
showSchema(String schemaName,
Map<String,String> options)
Retrieves information about a schema (a.k.a. collection).
|
||||||||||||||||||||||||
ShowSecurityResponse |
showSecurity(List<String> names,
Map<String,String> options)
Shows security information relating to users and/or roles.
|
||||||||||||||||||||||||
ShowSecurityResponse |
showSecurity(ShowSecurityRequest request)
Shows security information relating to users and/or roles.
|
||||||||||||||||||||||||
ShowSqlProcResponse |
showSqlProc(ShowSqlProcRequest request)
Shows information about SQL procedures, including the full definition of
each requested procedure.
|
||||||||||||||||||||||||
ShowSqlProcResponse |
showSqlProc(String procedureName,
Map<String,String> options)
Shows information about SQL procedures, including the full definition of
each requested procedure.
|
||||||||||||||||||||||||
ShowStatisticsResponse |
showStatistics(List<String> tableNames,
Map<String,String> options)
Retrieves the collected column statistics for the specified table(s).
|
||||||||||||||||||||||||
ShowStatisticsResponse |
showStatistics(ShowStatisticsRequest request)
Retrieves the collected column statistics for the specified table(s).
|
||||||||||||||||||||||||
ShowSystemPropertiesResponse |
showSystemProperties(Map<String,String> options)
Returns server configuration and version related information to the
caller.
|
||||||||||||||||||||||||
ShowSystemPropertiesResponse |
showSystemProperties(ShowSystemPropertiesRequest request)
Returns server configuration and version related information to the
caller.
|
||||||||||||||||||||||||
ShowSystemStatusResponse |
showSystemStatus(Map<String,String> options)
Provides server configuration and health related status to the caller.
|
||||||||||||||||||||||||
ShowSystemStatusResponse |
showSystemStatus(ShowSystemStatusRequest request)
Provides server configuration and health related status to the caller.
|
||||||||||||||||||||||||
ShowSystemTimingResponse |
showSystemTiming(Map<String,String> options)
Returns the last 100 database requests along with the request timing and
internal job id.
|
||||||||||||||||||||||||
ShowSystemTimingResponse |
showSystemTiming(ShowSystemTimingRequest request)
Returns the last 100 database requests along with the request timing and
internal job id.
|
||||||||||||||||||||||||
ShowTableResponse |
showTable(ShowTableRequest request)
Retrieves detailed information about a table, view, or schema, specified
in
tableName . |
||||||||||||||||||||||||
ShowTableResponse |
showTable(String tableName,
Map<String,String> options)
Retrieves detailed information about a table, view, or schema, specified
in
tableName . |
||||||||||||||||||||||||
ShowTableMetadataResponse |
showTableMetadata(List<String> tableNames,
Map<String,String> options)
Retrieves the user provided metadata for the specified tables.
|
||||||||||||||||||||||||
ShowTableMetadataResponse |
showTableMetadata(ShowTableMetadataRequest request)
Retrieves the user provided metadata for the specified tables.
|
||||||||||||||||||||||||
ShowTableMonitorsResponse |
showTableMonitors(List<String> monitorIds,
Map<String,String> options)
Show table monitors and their properties.
|
||||||||||||||||||||||||
ShowTableMonitorsResponse |
showTableMonitors(ShowTableMonitorsRequest request)
Show table monitors and their properties.
|
||||||||||||||||||||||||
ShowTablesByTypeResponse |
showTablesByType(ShowTablesByTypeRequest request)
Gets names of the tables whose type matches the given criteria.
|
||||||||||||||||||||||||
ShowTablesByTypeResponse |
showTablesByType(String typeId,
String label,
Map<String,String> options)
Gets names of the tables whose type matches the given criteria.
|
||||||||||||||||||||||||
ShowTriggersResponse |
showTriggers(List<String> triggerIds,
Map<String,String> options)
Retrieves information regarding the specified triggers or all existing
triggers currently active.
|
||||||||||||||||||||||||
ShowTriggersResponse |
showTriggers(ShowTriggersRequest request)
Retrieves information regarding the specified triggers or all existing
triggers currently active.
|
||||||||||||||||||||||||
ShowTypesResponse |
showTypes(ShowTypesRequest request)
Retrieves information for the specified data type ID or type label.
|
||||||||||||||||||||||||
ShowTypesResponse |
showTypes(String typeId,
String label,
Map<String,String> options)
Retrieves information for the specified data type ID or type label.
|
||||||||||||||||||||||||
ShowVideoResponse |
showVideo(List<String> paths,
Map<String,String> options)
Retrieves information about rendered videos.
|
||||||||||||||||||||||||
ShowVideoResponse |
showVideo(ShowVideoRequest request)
Retrieves information about rendered videos.
|
||||||||||||||||||||||||
ShowWalResponse |
showWal(List<String> tableNames,
Map<String,String> options)
Requests table WAL (write-ahead log) properties.
|
||||||||||||||||||||||||
ShowWalResponse |
showWal(ShowWalRequest request)
Requests table WAL (write-ahead log) properties.
|
||||||||||||||||||||||||
SolveGraphResponse |
solveGraph(SolveGraphRequest request)
Solves an existing graph for a type of problem (e.g., shortest path,
page rank, travelling salesman, etc.) using source nodes, destination
nodes, and additional, optional weights and restrictions.
|
||||||||||||||||||||||||
SolveGraphResponse |
solveGraph(String graphName,
List<String> weightsOnEdges,
List<String> restrictions,
String solverType,
List<String> sourceNodes,
List<String> destinationNodes,
String solutionTable,
Map<String,String> options)
Solves an existing graph for a type of problem (e.g., shortest path,
page rank, travelling salesman, etc.) using source nodes, destination
nodes, and additional, optional weights and restrictions.
|
||||||||||||||||||||||||
<TRequest> UpdateRecordsResponse |
updateRecords(String tableName,
List<String> expressions,
List<Map<String,String>> newValuesMaps,
List<TRequest> data,
Map<String,String> options)
Runs multiple predicate-based updates in a single call.
|
||||||||||||||||||||||||
<TRequest> UpdateRecordsResponse |
updateRecords(TypeObjectMap<TRequest> typeObjectMap,
String tableName,
List<String> expressions,
List<Map<String,String>> newValuesMaps,
List<TRequest> data,
Map<String,String> options)
Runs multiple predicate-based updates in a single call.
|
||||||||||||||||||||||||
<TRequest> UpdateRecordsResponse |
updateRecords(TypeObjectMap<TRequest> typeObjectMap,
UpdateRecordsRequest<TRequest> request)
Runs multiple predicate-based updates in a single call.
|
||||||||||||||||||||||||
<TRequest> UpdateRecordsResponse |
updateRecords(UpdateRecordsRequest<TRequest> request)
Runs multiple predicate-based updates in a single call.
|
||||||||||||||||||||||||
UpdateRecordsBySeriesResponse |
updateRecordsBySeries(String tableName,
String worldTableName,
String viewName,
List<String> reserved,
Map<String,String> options)
Updates the view specified by
tableName to include full series
(track) information from the worldTableName for the series
(tracks) present in the viewName . |
||||||||||||||||||||||||
UpdateRecordsBySeriesResponse |
updateRecordsBySeries(UpdateRecordsBySeriesRequest request)
Updates the view specified by
tableName to include full series (track) information from the worldTableName for the series (tracks) present in the viewName . |
||||||||||||||||||||||||
UpdateRecordsResponse |
updateRecordsRaw(RawUpdateRecordsRequest request)
Runs multiple predicate-based updates in a single call.
|
||||||||||||||||||||||||
UploadFilesResponse |
uploadFiles(List<String> fileNames,
List<ByteBuffer> fileData,
Map<String,String> options)
Uploads one or more files to KiFS.
|
||||||||||||||||||||||||
UploadFilesResponse |
uploadFiles(UploadFilesRequest request)
Uploads one or more files to KiFS.
|
||||||||||||||||||||||||
UploadFilesFromurlResponse |
uploadFilesFromurl(List<String> fileNames,
List<String> urls,
Map<String,String> options)
Uploads one or more files to KiFS.
|
||||||||||||||||||||||||
UploadFilesFromurlResponse |
uploadFilesFromurl(UploadFilesFromurlRequest request)
Uploads one or more files to KiFS.
|
||||||||||||||||||||||||
VisualizeGetFeatureInfoResponse |
visualizeGetFeatureInfo(List<String> tableNames,
List<String> xColumnNames,
List<String> yColumnNames,
List<String> geometryColumnNames,
List<List<String>> queryColumnNames,
String projection,
double minX,
double maxX,
double minY,
double maxY,
int width,
int height,
int x,
int y,
int radius,
long limit,
String encoding,
Map<String,String> options) |
||||||||||||||||||||||||
VisualizeGetFeatureInfoResponse |
visualizeGetFeatureInfo(VisualizeGetFeatureInfoRequest request) |
||||||||||||||||||||||||
VisualizeImageResponse |
visualizeImage(List<String> tableNames,
List<String> worldTableNames,
String xColumnName,
String yColumnName,
String symbolColumnName,
String geometryColumnName,
List<List<String>> trackIds,
double minX,
double maxX,
double minY,
double maxY,
int width,
int height,
String projection,
long bgColor,
Map<String,List<String>> styleOptions,
Map<String,String> options) |
||||||||||||||||||||||||
VisualizeImageResponse |
visualizeImage(VisualizeImageRequest request) |
||||||||||||||||||||||||
VisualizeImageChartResponse |
visualizeImageChart(String tableName,
List<String> xColumnNames,
List<String> yColumnNames,
double minX,
double maxX,
double minY,
double maxY,
int width,
int height,
String bgColor,
Map<String,List<String>> styleOptions,
Map<String,String> options)
Scatter plot is the only plot type currently supported.
|
||||||||||||||||||||||||
VisualizeImageChartResponse |
visualizeImageChart(VisualizeImageChartRequest request)
Scatter plot is the only plot type currently supported.
|
||||||||||||||||||||||||
VisualizeImageClassbreakResponse |
visualizeImageClassbreak(List<String> tableNames,
List<String> worldTableNames,
String xColumnName,
String yColumnName,
String symbolColumnName,
String geometryColumnName,
List<List<String>> trackIds,
String cbAttr,
List<String> cbVals,
String cbPointcolorAttr,
List<String> cbPointcolorVals,
String cbPointalphaAttr,
List<String> cbPointalphaVals,
String cbPointsizeAttr,
List<String> cbPointsizeVals,
String cbPointshapeAttr,
List<String> cbPointshapeVals,
double minX,
double maxX,
double minY,
double maxY,
int width,
int height,
String projection,
long bgColor,
Map<String,List<String>> styleOptions,
Map<String,String> options,
List<Integer> cbTransparencyVec) |
||||||||||||||||||||||||
VisualizeImageClassbreakResponse |
visualizeImageClassbreak(VisualizeImageClassbreakRequest request) |
||||||||||||||||||||||||
VisualizeImageContourResponse |
visualizeImageContour(List<String> tableNames,
String xColumnName,
String yColumnName,
String valueColumnName,
double minX,
double maxX,
double minY,
double maxY,
int width,
int height,
String projection,
Map<String,String> styleOptions,
Map<String,String> options) |
||||||||||||||||||||||||
VisualizeImageContourResponse |
visualizeImageContour(VisualizeImageContourRequest request) |
||||||||||||||||||||||||
VisualizeImageHeatmapResponse |
visualizeImageHeatmap(List<String> tableNames,
String xColumnName,
String yColumnName,
String valueColumnName,
String geometryColumnName,
double minX,
double maxX,
double minY,
double maxY,
int width,
int height,
String projection,
Map<String,String> styleOptions,
Map<String,String> options) |
||||||||||||||||||||||||
VisualizeImageHeatmapResponse |
visualizeImageHeatmap(VisualizeImageHeatmapRequest request) |
||||||||||||||||||||||||
VisualizeImageLabelsResponse |
visualizeImageLabels(String tableName,
String xColumnName,
String yColumnName,
String xOffset,
String yOffset,
String textString,
String font,
String textColor,
String textAngle,
String textScale,
String drawBox,
String drawLeader,
String lineWidth,
String lineColor,
String fillColor,
String leaderXColumnName,
String leaderYColumnName,
String filter,
double minX,
double maxX,
double minY,
double maxY,
int width,
int height,
String projection,
Map<String,String> options) |
||||||||||||||||||||||||
VisualizeImageLabelsResponse |
visualizeImageLabels(VisualizeImageLabelsRequest request) |
||||||||||||||||||||||||
VisualizeIsochroneResponse |
visualizeIsochrone(String graphName,
String sourceNode,
double maxSolutionRadius,
List<String> weightsOnEdges,
List<String> restrictions,
int numLevels,
boolean generateImage,
String levelsTable,
Map<String,String> styleOptions,
Map<String,String> solveOptions,
Map<String,String> contourOptions,
Map<String,String> options)
Generates an image containing isolines for travel results using an
existing graph.
|
||||||||||||||||||||||||
VisualizeIsochroneResponse |
visualizeIsochrone(VisualizeIsochroneRequest request)
Generates an image containing isolines for travel results using an
existing graph.
|
addHttpHeader, addKnownType, addKnownType, addKnownTypeFromTable, addKnownTypeFromTable, addKnownTypeObjectMap, createAuthorizationHeader, createHASyncModeHeader, decode, decode, decode, decodeMultiple, decodeMultiple, encode, encode, execute, execute, execute, finalize, getApiVersion, getAuthorizationFromHttpHeaders, getClusterInfo, getExecutor, getFailoverURLs, getHARingInfo, getHARingSize, getHASyncMode, getHmURL, getHmURLs, getHostAddresses, getHttpHeaders, getNumClusterSwitches, getPassword, getPrimaryHostname, getPrimaryUrl, getRecordsJson, getRecordsJson, getRecordsJson, getRecordsJson, getRecordsJson, getRecordsJson, getServerVersion, getSystemProperties, getThreadCount, getTimeout, getTypeDescriptor, getTypeObjectMap, getURL, getURLs, getUsername, getUseSnappy, incrementNumClusterSwitches, initializeHttpConnection, initializeHttpConnection, initializeHttpPostRequest, initializeHttpPostRequest, insertRecordsFromJson, insertRecordsFromJson, insertRecordsFromJson, isAutoDiscoveryEnabled, isKineticaRunning, list, options, ping, ping, ping, query, query, query, removeHttpHeader, removeProtectedHttpHeaders, selectNextCluster, setHASyncMode, setHostManagerPort, setTypeDescriptorIfMissing, submitRequest, submitRequest, submitRequest, submitRequest, submitRequest, submitRequest, submitRequest, submitRequest, submitRequestRaw, submitRequestRaw, submitRequestRaw, submitRequestRaw, submitRequestRaw, submitRequestRaw, submitRequestToHM, submitRequestToHM, switchURL
public GPUdb(String url) throws GPUdbException
Creates a GPUdb instance for the GPUdb server at the specified URL using default options. Note that these options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
Parameters:
url - The URL of the GPUdb server. Can be a comma-separated string containing multiple full URLs, or a single URL. For example, 'http://172.42.40.1:9191,http://172.42.40.2:9191'. If a single URL is given, the given URL will be used as the primary URL.
Throws:
GPUdbException - if an error occurs during creation.

public GPUdb(URL url) throws GPUdbException
Creates a GPUdb instance for the GPUdb server at the specified URL using default options. Note that these options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
Parameters:
url - The URL of the GPUdb server. The given URL will be used as the primary URL.
Throws:
GPUdbException - if an error occurs during creation.

public GPUdb(List<URL> urls) throws GPUdbException
Creates a GPUdb instance for the GPUdb server with the specified URLs using default options. At any given time, one URL (initially selected at random from the list) will be active and used for all GPUdb calls, but in the event of failure, the other URLs will be tried in order, and if a working one is found it will become the new active URL. Note that the default options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
Parameters:
urls - The URLs of the GPUdb server. If a single URL is given, it will be used as the primary URL.
Throws:
GPUdbException - if an error occurs during creation.

public GPUdb(String url, GPUdbBase.Options options) throws GPUdbException
Creates a GPUdb instance for the GPUdb server at the specified URL using the specified options. Note that these options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
Parameters:
url - The URL of the GPUdb server. Can be a comma-separated string containing multiple full URLs, or a single URL. For example, 'http://172.42.40.1:9191,http://172.42.40.2:9191'. If a single URL is given, and no primary URL is specified via the options, the given URL will be used as the primary URL.
options - The options, e.g. primary cluster URL, to use.
Throws:
GPUdbException - if an error occurs during creation.
See Also:
GPUdbBase.Options

public GPUdb(URL url, GPUdbBase.Options options) throws GPUdbException
Creates a GPUdb instance for the GPUdb server at the specified URL using the specified options. Note that these options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
Parameters:
url - The URL of the GPUdb server. If no primary URL is specified via the options, the given URL will be used as the primary URL.
options - The options, e.g. primary cluster URL, to use.
Throws:
GPUdbException - if an error occurs during creation.
See Also:
GPUdbBase.Options

public GPUdb(List<URL> urls, GPUdbBase.Options options) throws GPUdbException
Creates a GPUdb instance for the GPUdb server with the specified URLs using the specified options. At any given time, one URL (initially selected at random from the list) will be active and used for all GPUdb calls, but in the event of failure, the other URLs will be tried in order, and if a working one is found it will become the new active URL. Note that the specified options cannot be changed subsequently; to use different options, a new GPUdb instance must be created.
Parameters:
urls - The URLs of the GPUdb server. If a single URL is given, and no primary URL is specified via the options, the given URL will be used as the primary URL.
options - The options, e.g. primary cluster URL, to use.
Throws:
GPUdbException - if an error occurs during creation.
See Also:
GPUdbBase.Options
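As an illustrative sketch (the URLs and credentials below are placeholders, not values from this document), a client for a two-node ring might be constructed as follows:

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbBase;
import com.gpudb.GPUdbException;

public class ConnectExample {
    public static void main(String[] args) throws GPUdbException {
        // Options are fixed at construction time; to change them later,
        // a new GPUdb instance must be built.
        GPUdbBase.Options opts = new GPUdbBase.Options();
        opts.setUsername("admin");      // placeholder credentials
        opts.setPassword("password");
        opts.setTimeout(60000);         // HTTP timeout, in milliseconds

        // Comma-separated URLs: one becomes the active URL, with
        // failover to the other in the event of an error.
        GPUdb db = new GPUdb(
                "http://172.42.40.1:9191,http://172.42.40.2:9191", opts);

        System.out.println("Connected to: " + db.getURL());
    }
}
```

Since the options are immutable after construction, changing, e.g., the timeout requires creating a fresh instance.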
public AdminAddHostResponse adminAddHost(AdminAddHostRequest request) throws GPUdbException
Note: This method should be used for on-premise deployments only.
Parameters:
request - Request object containing the parameters for the operation.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminAddHostResponse adminAddHost(String hostAddress, Map<String,String> options) throws GPUdbException
Note: This method should be used for on-premise deployments only.
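For illustration only (the host address below is a placeholder, and `db` is assumed to be an existing GPUdb connection), a validation-only call might be sketched as:

```java
// Validate adding a host without actually adding it (dry run).
Map<String, String> options = new HashMap<>();
options.put("dry_run", "true");           // validation checks only
options.put("accepts_failover", "true");  // allow failover processes here
options.put("ram_limit", "-1");           // no RAM limit on the new host

AdminAddHostResponse response = db.adminAddHost("172.123.45.69", options);
```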
Parameters:
hostAddress - IP address of the host that will be added to the cluster. This host must have installed the same version of Kinetica as the cluster to which it is being added.
options - Optional parameters.
DRY_RUN
: If set to TRUE
, only validation checks will be performed.
No host is added.
Supported values: TRUE, FALSE. The default value is FALSE.
ACCEPTS_FAILOVER
: If set to TRUE
, the host will accept processes (ranks,
graph server, etc.) in the event of a failover
on another node in the cluster.
Supported values: TRUE, FALSE. The default value is FALSE.
PUBLIC_ADDRESS
: The publicly-accessible IP
address for the host being added, typically
specified for clients using multi-head
operations. This setting is required if any
other host(s) in the cluster specify a public
address.
HOST_MANAGER_PUBLIC_URL
: The
publicly-accessible full path URL to the host
manager on the host being added, e.g.,
'http://172.123.45.67:9300'. The default host
manager port can be found in the list of ports used by
Kinetica.
RAM_LIMIT
: The desired RAM limit for the host
being added, i.e. the sum of RAM usage for all
processes on the host will not be able to exceed
this value. Supported units: K (thousand), KB
(kilobytes), M (million), MB (megabytes), G
(billion), GB (gigabytes); if no unit is
provided, the value is assumed to be in bytes.
For example, if RAM_LIMIT
is set to 10M, the resulting RAM
limit is 10 million bytes. Set RAM_LIMIT
to -1 to have no RAM limit.
GPUS
: Comma-delimited list of GPU indices
(starting at 1) that are eligible for running
worker processes. If left blank, all GPUs on the
host being added will be eligible.
The default value is an empty Map.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminAddRanksResponse adminAddRanks(AdminAddRanksRequest request) throws GPUdbException
Adds one or more ranks to the cluster; to rebalance data and shards across the cluster afterward, use adminRebalance.
The database must be offline for this operation; see adminOffline.
For example, if attempting to add three new ranks (two ranks on host 172.123.45.67 and one rank on host 172.123.45.68) to a Kinetica cluster with additional configuration parameters:
* hosts
would
be an array including 172.123.45.67 in the first two indices (signifying
two ranks being added to host 172.123.45.67) and 172.123.45.68 in the
last index (signifying one rank being added to host 172.123.45.68)
* configParams
would be an array of maps, with each map corresponding to
the ranks being added in hosts
. The key of
each map would be the configuration parameter name and the value would
be the parameter's value, e.g. '{"rank.gpu":"1"}'
This endpoint's processing includes copying all replicated table data to
the new rank(s) and therefore could take a long time. The API call may
time out if run directly. It is recommended to run this endpoint
asynchronously via createJob
.
Note: This method should be used for on-premise deployments only.
Parameters:
request - Request object containing the parameters for the operation.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminAddRanksResponse adminAddRanks(List<String> hosts, List<Map<String,String>> configParams, Map<String,String> options) throws GPUdbException
Adds one or more ranks to the cluster; to rebalance data and shards across the cluster afterward, use adminRebalance.
The database must be offline for this operation; see adminOffline.
For example, if attempting to add three new ranks (two ranks on host 172.123.45.67 and one rank on host 172.123.45.68) to a Kinetica cluster with additional configuration parameters:
* hosts
would be an array including 172.123.45.67 in the first
two indices (signifying two ranks being added to host 172.123.45.67) and
172.123.45.68 in the last index (signifying one rank being added to host
172.123.45.68)
* configParams
would be an array of maps, with each map
corresponding to the ranks being added in hosts
. The key of each
map would be the configuration parameter name and the value would be the
parameter's value, e.g. '{"rank.gpu":"1"}'
This endpoint's processing includes copying all replicated table data to
the new rank(s) and therefore could take a long time. The API call may
time out if run directly. It is recommended to run this endpoint
asynchronously via createJob
.
Note: This method should be used for on-premise deployments only.
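Mirroring the example above (host addresses are placeholders, and `db` is assumed to be an existing GPUdb connection), the call might be sketched as:

```java
// Two ranks on host 172.123.45.67 and one rank on host 172.123.45.68.
List<String> hosts = Arrays.asList(
        "172.123.45.67", "172.123.45.67", "172.123.45.68");

// One config map per entry in hosts; an empty list would apply
// default parameters to every new rank.
List<Map<String, String>> configParams = new ArrayList<>();
for (int i = 0; i < hosts.size(); i++) {
    Map<String, String> params = new HashMap<>();
    params.put("rank.gpu", "1");  // note: no 'rankN' prefix here
    configParams.add(params);
}

AdminAddRanksResponse response =
        db.adminAddRanks(hosts, configParams, new HashMap<>());
```

Because this copies replicated table data to the new ranks, running it asynchronously via createJob is recommended for larger systems.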
Parameters:
hosts
- Array of host IP addresses (matching a hostN.address from
the gpudb.conf file), or host identifiers (e.g. 'host0'
from the gpudb.conf file), on which to add ranks to the
cluster. The hosts must already be in the cluster. If
needed beforehand, to add a new host to the cluster use
adminAddHost
.
Include the same entry as many times as there are ranks to
add to the cluster, e.g., if two ranks on host
172.123.45.67 should be added, hosts
could look
like '["172.123.45.67", "172.123.45.67"]'. All ranks will
be added simultaneously, i.e. they're not added in the
order of this array. Each entry in this array corresponds
to the entry at the same index in configParams.
configParams
- Array of maps containing configuration parameters
to apply to the new ranks found in hosts
.
For example, '{"rank.gpu":"2",
"tier.ram.rank.limit":"10000000000"}'. Currently,
the available parameters are rank-specific
parameters in the Network, Hardware, Text Search, and RAM Tiered Storage sections in
the gpudb.conf file, with the key exception of the
'rankN.host' settings in the Network section that
will be determined by hosts
instead. Though
many of these configuration parameters typically
are affixed with 'rankN' in the gpudb.conf file
(where N is the rank number), the 'N' should be
omitted in configParams
as the new rank
number(s) are not allocated until the ranks have
been added to the cluster. Each entry in this array
corresponds to the entry at the same index in the
hosts
. This array must either be completely
empty or have the same number of elements as the
hosts
. An empty configParams
array
will result in the new ranks being set with default
parameters.
options
- Optional parameters.
DRY_RUN
: If TRUE
, only validation checks will be performed.
No ranks are added.
Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminAlterHostResponse adminAlterHost(AdminAlterHostRequest request) throws GPUdbException
Parameters:
request - Request object containing the parameters for the operation.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminAlterHostResponse adminAlterHost(String host, Map<String,String> options) throws GPUdbException
Parameters:
host - Identifies the host this applies to. Can be the host address, or formatted as 'hostN' where N is the host number as specified in gpudb.conf.
options - Optional parameters.
ACCEPTS_FAILOVER
: If set to TRUE
, the host will accept processes (ranks,
graph server, etc.) in the event of a failover
on another node in the cluster.
Supported values: TRUE, FALSE. The default value is FALSE.
The default value is an empty Map.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminAlterJobsResponse adminAlterJobs(AdminAlterJobsRequest request) throws GPUdbException
Parameters:
request - Request object containing the parameters for the operation.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminAlterJobsResponse adminAlterJobs(List<Long> jobIds, String action, Map<String,String> options) throws GPUdbException
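As a sketch (the job id and tag are placeholders, and "cancel" is assumed to be among the supported action values, which are not enumerated in this excerpt):

```java
// Cancel a long-running job, identifying it by id and by the tag
// it was created with via the job framework.
Map<String, String> options = new HashMap<>();
options.put("job_tag", "nightly-load");  // hypothetical tag from createJob

AdminAlterJobsResponse response =
        db.adminAlterJobs(Arrays.asList(1234567L), "cancel", options);
```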
Parameters:
jobIds - Jobs to be modified.
action - Action to be performed on the jobs specified by jobIds.
Supported values:
options
- Optional parameters.
JOB_TAG
: Job tag returned in call to create the
job
The default value is an empty Map.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminBackupBeginResponse adminBackupBegin(AdminBackupBeginRequest request) throws GPUdbException
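A typical pairing with adminBackupEnd might be sketched as follows (the file-system copy step itself is outside the API; `db` is assumed to be an existing GPUdb connection):

```java
// Enter backup mode, take the file-system level backup, then exit.
db.adminBackupBegin(new HashMap<>());
try {
    // ... copy the persist directory to backup storage here ...
} finally {
    // Always leave backup mode, even if the copy step fails.
    db.adminBackupEnd(new HashMap<>());
}
```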
Prepares the system for a backup, to be completed with adminBackupEnd.
Parameters:
request - Request object containing the parameters for the operation.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminBackupBeginResponse adminBackupBegin(Map<String,String> options) throws GPUdbException
Prepares the system for a backup, to be completed with adminBackupEnd.
Parameters:
options - Optional parameters. The default value is an empty Map.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminBackupEndResponse adminBackupEnd(AdminBackupEndRequest request) throws GPUdbException
Parameters:
request - Request object containing the parameters for the operation.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminBackupEndResponse adminBackupEnd(Map<String,String> options) throws GPUdbException
Parameters:
options - Optional parameters. The default value is an empty Map.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminHaRefreshResponse adminHaRefresh(AdminHaRefreshRequest request) throws GPUdbException
Parameters:
request - Request object containing the parameters for the operation.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminHaRefreshResponse adminHaRefresh(Map<String,String> options) throws GPUdbException
Parameters:
options - Optional parameters. The default value is an empty Map.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminOfflineResponse adminOffline(AdminOfflineRequest request) throws GPUdbException
Parameters:
request - Request object containing the parameters for the operation.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminOfflineResponse adminOffline(boolean offline, Map<String,String> options) throws GPUdbException
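For example, a maintenance window might bracket offline work like this (a sketch; `db` is assumed to be an existing GPUdb connection):

```java
// Take the database offline, flushing in-memory data to disk first.
Map<String, String> options = new HashMap<>();
options.put("flush_to_disk", "true");

db.adminOffline(true, options);
// ... perform maintenance, e.g. adminAddRanks or adminRebalance ...
db.adminOffline(false, new HashMap<>());  // bring the database back online
```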
Parameters:
offline - Set to true if the desired state is offline.
Supported values: true, false
options - Optional parameters.
FLUSH_TO_DISK
: Flush to disk when going
offline.
Supported values: TRUE, FALSE.
The default value is an empty Map.
Returns:
Response object containing the results of the operation.
Throws:
GPUdbException - if an error occurs during the operation.

public AdminRebalanceResponse adminRebalance(AdminRebalanceRequest request) throws GPUdbException
The database must be offline for this operation; see adminOffline.
* If adminRebalance
is invoked after a change is made to the cluster, e.g., a host was added
or removed, sharded data will be evenly redistributed across the
cluster by number of shards per rank while unsharded data will be
redistributed across the cluster by data size per rank
* If adminRebalance
is invoked at some point when unsharded data (a.k.a. randomly-sharded) in the cluster is unevenly
distributed over time, sharded data will not move while unsharded data
will be redistributed across the cluster by data size per rank
NOTE: Replicated data will not move as a result of this call
This endpoint's processing time depends on the amount of data in the
system, so the API call may time out if run directly. It is
recommended to run this endpoint asynchronously via createJob
.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminRebalanceResponse adminRebalance(Map<String,String> options) throws GPUdbException
The database must be offline for this operation; see adminOffline.
* If adminRebalance
is invoked after a
change is made to the cluster, e.g., a host was added or removed, sharded
data will be evenly redistributed across the cluster by number of
shards per rank while unsharded data will be redistributed across the
cluster by data size per rank
* If adminRebalance
is invoked at some
point when unsharded data (a.k.a. randomly-sharded) in the cluster is unevenly
distributed over time, sharded data will not move while unsharded data
will be redistributed across the cluster by data size per rank
NOTE: Replicated data will not move as a result of this call
This endpoint's processing time depends on the amount of data in the
system, so the API call may time out if run directly. It is
recommended to run this endpoint asynchronously via createJob
.
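As a sketch (not part of the reference itself), the options overload might be invoked as below. The cluster URL is a placeholder, and the option keys correspond to the constants described in the following list:

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.AdminRebalanceResponse;

public class RebalanceSketch {
    public static void main(String[] args) throws GPUdbException {
        // Placeholder URL; point this at your own cluster
        GPUdb gpudb = new GPUdb("http://localhost:9191");

        // Rebalance only sharded data, at low aggressiveness so other
        // queries can interleave with the rebalance
        Map<String, String> options = new HashMap<>();
        options.put("rebalance_sharded_data", "true");
        options.put("rebalance_unsharded_data", "false");
        options.put("aggressiveness", "3");

        AdminRebalanceResponse response = gpudb.adminRebalance(options);
    }
}
```

Because processing time scales with data volume, submitting the same request asynchronously via createJob avoids an HTTP timeout on large systems.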
options
- Optional parameters.
REBALANCE_SHARDED_DATA
: If TRUE
, sharded data will be
rebalanced approximately equally across the
cluster. Note that for clusters with large
amounts of sharded data, this data transfer
could be time-consuming and result in delayed
query responses.
Supported values: TRUE, FALSE. The default value is TRUE.
REBALANCE_UNSHARDED_DATA
: If TRUE
, unsharded data (a.k.a. randomly-sharded) will be
rebalanced approximately equally across the
cluster. Note that for clusters with large
amounts of unsharded data, this data transfer
could be time-consuming and result in delayed
query responses.
Supported values: TRUE, FALSE. The default value is TRUE.
TABLE_INCLUDES
: Comma-separated list of
unsharded table names to rebalance. Not
applicable to sharded tables because they are
always rebalanced. Cannot be used simultaneously
with TABLE_EXCLUDES
. This parameter is ignored if
REBALANCE_UNSHARDED_DATA
is FALSE
.
TABLE_EXCLUDES
: Comma-separated list of
unsharded table names to not rebalance. Not
applicable to sharded tables because they are
always rebalanced. Cannot be used simultaneously
with TABLE_INCLUDES
. This parameter is ignored if
REBALANCE_UNSHARDED_DATA
is FALSE
.
AGGRESSIVENESS
: Influences how much data is
moved at a time during rebalance. A higher
AGGRESSIVENESS
will complete the rebalance
faster. A lower AGGRESSIVENESS
will take longer but allow for
better interleaving between the rebalance and
other queries. Valid values are constants from 1
(lowest) to 10 (highest). The default value is
'10'.
COMPACT_AFTER_REBALANCE
: Perform compaction of
deleted records once the rebalance completes to
reclaim memory and disk space. Default is TRUE
, unless REPAIR_INCORRECTLY_SHARDED_DATA
is set to
TRUE
.
Supported values: TRUE, FALSE. The default value is TRUE.
COMPACT_ONLY
: If set to TRUE
, ignore rebalance options and attempt to
perform compaction of deleted records to reclaim
memory and disk space without rebalancing first.
Supported values: TRUE, FALSE. The default value is FALSE.
REPAIR_INCORRECTLY_SHARDED_DATA
: Scans for any
data sharded incorrectly and re-routes the data
to the correct location. Only necessary if
adminVerifyDb
reports an error in sharding alignment. This can
be done as part of a typical rebalance after
expanding the cluster or in a standalone fashion
when it is believed that data is sharded
incorrectly somewhere in the cluster. Compaction
will not be performed by default when this is
enabled. If this option is set to TRUE
, the time necessary to rebalance and the
memory used by the rebalance may increase.
Supported values: TRUE, FALSE. The default value is FALSE.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminRemoveHostResponse adminRemoveHost(AdminRemoveHostRequest request) throws GPUdbException
adminRemoveRanks
or
manually switched over to a new host using adminSwitchover
prior to
host removal. If the host to be removed has the graph server or SQL
planner running on it, these must be manually switched over to a new
host using adminSwitchover
.
Note: This method should be used for on-premise deployments only.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminRemoveHostResponse adminRemoveHost(String host, Map<String,String> options) throws GPUdbException
adminRemoveRanks
or manually switched
over to a new host using adminSwitchover
prior to host removal. If the host to be removed has
the graph server or SQL planner running on it, these must be manually
switched over to a new host using adminSwitchover
.
Note: This method should be used for on-premise deployments only.
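A minimal sketch of a dry-run removal check follows; the URL and the host identifier 'host2' are placeholders, not values from the reference:

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.AdminRemoveHostResponse;

public class RemoveHostSketch {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");  // placeholder URL

        // dry_run=true performs validation only; no host is removed
        Map<String, String> options = new HashMap<>();
        options.put("dry_run", "true");

        AdminRemoveHostResponse response =
            gpudb.adminRemoveHost("host2", options);
    }
}
```

Once the dry run passes, rerun the call without the DRY_RUN option (or with it set to "false") to perform the actual removal.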
host
- Identifies the host this applies to. Can be the host
address, or formatted as 'hostN' where N is the host number
as specified in gpudb.conf.
options
- Optional parameters.
DRY_RUN
: If set to TRUE
, only validation checks will be performed.
No host is removed.
Supported values: TRUE, FALSE. The default value is FALSE.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminRemoveRanksResponse adminRemoveRanks(AdminRemoveRanksRequest request) throws GPUdbException
REBALANCE_SHARDED_DATA
or REBALANCE_UNSHARDED_DATA
parameters are set to FALSE
in the
options
,
in which case the corresponding sharded
data and/or unsharded data (a.k.a. randomly-sharded) will be deleted.
The database must be offline for this operation; see adminOffline.
This endpoint's processing time depends on the amount of data in the
system, so the API call may time out if run directly. It is
recommended to run this endpoint asynchronously via createJob
.
Note: This method should be used for on-premise deployments only.
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminRemoveRanksResponse adminRemoveRanks(List<String> ranks, Map<String,String> options) throws GPUdbException
REBALANCE_SHARDED_DATA
or REBALANCE_UNSHARDED_DATA
parameters are set to FALSE
in the
options
, in which case the corresponding sharded
data and/or unsharded data (a.k.a. randomly-sharded) will be deleted.
The database must be offline for this operation; see adminOffline.
This endpoint's processing time depends on the amount of data in the
system, so the API call may time out if run directly. It is
recommended to run this endpoint asynchronously via createJob
.
Note: This method should be used for on-premise deployments only.
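As an illustrative sketch (the URL and rank name are placeholders), removing a single worker rank while rebalancing its data onto the remaining ranks might look like:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.AdminRemoveRanksResponse;

public class RemoveRanksSketch {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");  // placeholder URL

        // Remove one worker rank; rank 0 (the head rank) cannot be removed
        List<String> ranks = Arrays.asList("rank2");

        // Redistribute the removed rank's data instead of deleting it
        Map<String, String> options = new HashMap<>();
        options.put("rebalance_sharded_data", "true");
        options.put("rebalance_unsharded_data", "true");

        AdminRemoveRanksResponse response =
            gpudb.adminRemoveRanks(ranks, options);
    }
}
```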
ranks
- Each array value designates one or more ranks to remove
from the cluster. Values can be formatted as 'rankN' for a
specific rank, 'hostN' (from the gpudb.conf file) to
remove all ranks on that host, or the host IP address
(hostN.address from the gpudb.conf file) which also removes
all ranks on that host. Rank 0 (the head rank) cannot be
removed (but can be moved to another host using adminSwitchover
).
At least one worker rank must be left in the cluster after
the operation.options
- Optional parameters.
REBALANCE_SHARDED_DATA
: If TRUE
, sharded data will be
rebalanced approximately equally across the
cluster. Note that for clusters with large
amounts of sharded data, this data transfer
could be time-consuming and result in delayed
query responses.
Supported values: TRUE, FALSE. The default value is TRUE.
REBALANCE_UNSHARDED_DATA
: If TRUE
, unsharded data (a.k.a. randomly-sharded) will be
rebalanced approximately equally across the
cluster. Note that for clusters with large
amounts of unsharded data, this data transfer
could be time-consuming and result in delayed
query responses.
Supported values: TRUE, FALSE. The default value is TRUE.
AGGRESSIVENESS
: Influences how much data is
moved at a time during rebalance. A higher
AGGRESSIVENESS
will complete the rebalance
faster. A lower AGGRESSIVENESS
will take longer but allow for
better interleaving between the rebalance and
other queries. Valid values are constants from 1
(lowest) to 10 (highest). The default value is
'10'.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminRepairTableResponse adminRepairTable(AdminRepairTableRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminRepairTableResponse adminRepairTable(List<String> tableNames, Map<String,String> options) throws GPUdbException
tableNames
- List of tables to query. An asterisk returns all
tables.options
- Optional parameters.
REPAIR_POLICY
: Corrective action to take.
Supported values:
DELETE_CHUNKS
: Deletes any corrupted
chunks
SHRINK_COLUMNS
: Shrinks corrupted
chunks to the shortest column
REPLAY_WAL
: Manually invokes wal replay
on the table
VERIFY_ALL
: If FALSE
only table chunk data already known to be
corrupted will be repaired. Otherwise the
database will perform a full table scan to check
for correctness.
Supported values: TRUE, FALSE. The default value is FALSE.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminShowAlertsResponse adminShowAlerts(AdminShowAlertsRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminShowAlertsResponse adminShowAlerts(int numAlerts, Map<String,String> options) throws GPUdbException
numAlerts
- Number of most recent alerts to request. The response
will include up to numAlerts
depending on how
many alerts there are in the system. A value of 0
returns all stored alerts.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminShowClusterOperationsResponse adminShowClusterOperations(AdminShowClusterOperationsRequest request) throws GPUdbException
historyIndex
.
Returns details on the requested cluster operation.
The response will also indicate how many cluster operations are stored in the history.
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public AdminShowClusterOperationsResponse adminShowClusterOperations(int historyIndex, Map<String,String> options) throws GPUdbException
historyIndex
.
Returns details on the requested cluster operation.
The response will also indicate how many cluster operations are stored in the history.
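A brief sketch of fetching the most recent cluster operation (the URL is a placeholder):

```java
import java.util.HashMap;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.AdminShowClusterOperationsResponse;

public class ShowClusterOpsSketch {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");  // placeholder URL

        // historyIndex 0 retrieves the most recent cluster operation
        AdminShowClusterOperationsResponse response =
            gpudb.adminShowClusterOperations(0, new HashMap<String, String>());
    }
}
```

Incrementing historyIndex walks backward through the stored operation history.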
historyIndex
- Indicates which cluster operation to retrieve. Use
0 for the most recent. The default value is 0.options
- Optional parameters. The default value is an empty
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public AdminShowJobsResponse adminShowJobs(AdminShowJobsRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminShowJobsResponse adminShowJobs(Map<String,String> options) throws GPUdbException
options
- Optional parameters.
SHOW_ASYNC_JOBS
: If TRUE
, then the completed async jobs are also
included in the response. By default, once the
async jobs are completed they are no longer
included in the jobs list.
Supported values: TRUE, FALSE. The default value is FALSE.
SHOW_WORKER_INFO
: If TRUE
, then information is also returned from
worker ranks. By default only status from the
head rank is returned.
Supported values: TRUE, FALSE.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminShowShardsResponse adminShowShards(AdminShowShardsRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminShowShardsResponse adminShowShards(Map<String,String> options) throws GPUdbException
options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminShutdownResponse adminShutdown(AdminShutdownRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminShutdownResponse adminShutdown(String exitType, String authorization, Map<String,String> options) throws GPUdbException
exitType
- Reserved for future use. User can pass an empty string.authorization
- No longer used. User can pass an empty string.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminSwitchoverResponse adminSwitchover(AdminSwitchoverRequest request) throws GPUdbException
Note: This method should be used for on-premise deployments only.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminSwitchoverResponse adminSwitchover(List<String> processes, List<String> destinations, Map<String,String> options) throws GPUdbException
Note: This method should be used for on-premise deployments only.
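As a sketch under stated assumptions (the URL, 'rank2', and 'host3' are placeholders), a dry-run switchover of one rank to another host might look like:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.AdminSwitchoverResponse;

public class SwitchoverSketch {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");  // placeholder URL

        // Move rank2 to host3; entries in the two lists pair up by index
        List<String> processes = Arrays.asList("rank2");
        List<String> destinations = Arrays.asList("host3");

        // Validate only; nothing is switched over while dry_run is true
        Map<String, String> options = new HashMap<>();
        options.put("dry_run", "true");

        AdminSwitchoverResponse response =
            gpudb.adminSwitchover(processes, destinations, options);
    }
}
```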
processes
- Indicates the process identifier to switch over to
another host. Options are 'hostN' and 'rankN' where
'N' corresponds to the number associated with a host
or rank in the Network section of the gpudb.conf
file; e.g., 'host[N].address' or 'rank[N].host'. If
'hostN' is provided, all processes on that host will
be moved to another host. Each entry in this array
will be switched over to the corresponding host entry
at the same index in destinations
.destinations
- Indicates to which host to switch over each
corresponding process given in processes
.
Each index must be specified as 'hostN' where 'N'
corresponds to the number associated with a host or
rank in the Network section of the gpudb.conf
file; e.g., 'host[N].address'. Each entry in this
array will receive the corresponding process entry
at the same index in processes
.options
- Optional parameters.
DRY_RUN
: If set to TRUE
, only validation checks will be performed.
Nothing is switched over.
Supported values: TRUE, FALSE. The default value is FALSE.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminVerifyDbResponse adminVerifyDb(AdminVerifyDbRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AdminVerifyDbResponse adminVerifyDb(Map<String,String> options) throws GPUdbException
options
- Optional parameters.
REBUILD_ON_ERROR
: [DEPRECATED -- Use the
Rebuild DB feature of GAdmin instead.].
Supported values: TRUE, FALSE. The default value is FALSE.
VERIFY_NULLS
: When TRUE
, verifies that null values are set to
zero.
Supported values: TRUE, FALSE. The default value is FALSE.
VERIFY_PERSIST
: When TRUE
, persistent objects will be compared
against their state in memory and workers will
be checked for orphaned table data in persist.
To check for orphaned worker data, either set
CONCURRENT_SAFE
in options
to TRUE
or place the database offline.
Supported values: TRUE, FALSE. The default value is FALSE.
CONCURRENT_SAFE
: When TRUE
, allows this endpoint to be run safely
with other concurrent database operations. Other
operations may be slower while this is running.
Supported values: TRUE, FALSE. The default value is TRUE.
VERIFY_RANK0
: If TRUE
, compare rank0 table metadata against
workers' metadata.
Supported values: TRUE, FALSE. The default value is FALSE.
DELETE_ORPHANED_TABLES
: If TRUE
, orphaned table directories found on
workers for which there is no corresponding
metadata will be deleted. Must set VERIFY_PERSIST
in options
to TRUE
. It is recommended to run this while the
database is offline OR set CONCURRENT_SAFE
in options
to TRUE
.
Supported values: TRUE, FALSE. The default value is FALSE.
VERIFY_ORPHANED_TABLES_ONLY
: If TRUE
, only the presence of orphaned table
directories will be checked, all persistence
checks will be skipped.
Supported values: TRUE, FALSE. The default value is FALSE.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateConvexHullResponse aggregateConvexHull(AggregateConvexHullRequest request) throws GPUdbException
tableName
.request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateConvexHullResponse aggregateConvexHull(String tableName, String xColumnName, String yColumnName, Map<String,String> options) throws GPUdbException
tableName
.tableName
- Name of table on which the operation will be
performed. Must be an existing table, in
[schema_name.]table_name format, using standard name resolution rules.xColumnName
- Name of the column containing the x coordinates of
the points for the operation being performed.yColumnName
- Name of the column containing the y coordinates of
the points for the operation being performed.options
- Optional parameters. The default value is an empty
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public RawAggregateGroupByResponse aggregateGroupByRaw(AggregateGroupByRequest request) throws GPUdbException
For aggregation details and examples, see Aggregation. For limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except unrestricted-length strings may be used for computing applicable aggregates; columns marked as store-only are unable to be used in grouping or aggregation.
The results can be paged via the offset
and
limit
parameters. For example, to get 10 groups with the largest counts the
inputs would be: limit=10, options={"sort_order":"descending",
"sort_by":"value"}.
options
can be used to customize behavior of this call e.g. filtering or
sorting the results.
To group by columns 'x' and 'y' and compute the number of objects within each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use: column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg, mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min, arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to HAVING
.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a RESULT_TABLE
name is specified in the options
, the
results are stored in a new table with that name--no results are
returned in the response. Both the table name and resulting column
names must adhere to standard naming conventions; column/aggregation
expressions will need to be aliased. If the source table's shard
key is used as the grouping column(s) and all result records are
selected (offset
is 0 and limit
is -9999),
the result table will be sharded, in all other cases it will be
replicated. Sorting will properly function only if the result table is
replicated or if there is only one processing node and should not be
relied upon in other cases. Not available when any of the values of
columnNames
is an unrestricted-length string.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateGroupByResponse aggregateGroupBy(AggregateGroupByRequest request) throws GPUdbException
For aggregation details and examples, see Aggregation. For limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except unrestricted-length strings may be used for computing applicable aggregates; columns marked as store-only are unable to be used in grouping or aggregation.
The results can be paged via the offset
and
limit
parameters. For example, to get 10 groups with the largest counts the
inputs would be: limit=10, options={"sort_order":"descending",
"sort_by":"value"}.
options
can be used to customize behavior of this call e.g. filtering or
sorting the results.
To group by columns 'x' and 'y' and compute the number of objects within each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use: column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg, mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min, arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to HAVING
.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a RESULT_TABLE
name is specified in the options
, the
results are stored in a new table with that name--no results are
returned in the response. Both the table name and resulting column
names must adhere to standard naming conventions; column/aggregation
expressions will need to be aliased. If the source table's shard
key is used as the grouping column(s) and all result records are
selected (offset
is 0 and limit
is -9999),
the result table will be sharded, in all other cases it will be
replicated. Sorting will properly function only if the result table is
replicated or if there is only one processing node and should not be
relied upon in other cases. Not available when any of the values of
columnNames
is an unrestricted-length string.
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateGroupByResponse aggregateGroupBy(String tableName, List<String> columnNames, long offset, long limit, Map<String,String> options) throws GPUdbException
For aggregation details and examples, see Aggregation. For limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except unrestricted-length strings may be used for computing applicable aggregates; columns marked as store-only are unable to be used in grouping or aggregation.
The results can be paged via the offset
and limit
parameters. For example, to get 10 groups with the largest counts the
inputs would be: limit=10, options={"sort_order":"descending",
"sort_by":"value"}.
options
can be used to customize behavior of this call
e.g. filtering or sorting the results.
To group by columns 'x' and 'y' and compute the number of objects within each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use: column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg, mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min, arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to HAVING
.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a RESULT_TABLE
name is specified in the options
, the results are
stored in a new table with that name--no results are returned in the
response. Both the table name and resulting column names must adhere to
standard
naming conventions; column/aggregation expressions will need to be
aliased. If the source table's shard
key is used as the grouping column(s) and all result records are
selected (offset
is 0 and limit
is -9999), the result
table will be sharded, in all other cases it will be replicated.
Sorting will properly function only if the result table is replicated or
if there is only one processing node and should not be relied upon in
other cases. Not available when any of the values of columnNames
is an unrestricted-length string.
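The paging example above (10 groups with the largest counts) can be sketched as follows; the URL, table name, and column names are hypothetical:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.AggregateGroupByResponse;

public class GroupBySketch {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");  // placeholder URL

        // Group by x and y; compute the count and the sum of z per group
        List<String> columnNames =
            Arrays.asList("x", "y", "count(*)", "sum(z)");

        // Return the 10 groups with the largest aggregate values
        Map<String, String> options = new HashMap<>();
        options.put("sort_order", "descending");
        options.put("sort_by", "value");

        AggregateGroupByResponse response = gpudb.aggregateGroupBy(
            "example_schema.example_table",  // hypothetical table name
            columnNames, 0, 10, options);
    }
}
```

Passing END_OF_SET (-9999) as the limit instead of 10 returns as many records as the server's max_get_records_size setting allows.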
tableName
- Name of an existing table or view on which the
operation will be performed, in
[schema_name.]table_name format, using standard name resolution rules.columnNames
- List of one or more column names, expressions, and
aggregate expressions.offset
- A positive integer indicating the number of initial
results to skip (this can be useful for paging through
the results). The default value is 0. The minimum allowed
value is 0. The maximum allowed value is MAX_INT.limit
- A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to indicate
that the maximum number of results allowed by the server
should be returned. The number of records returned will
never exceed the server's own limit, defined by the max_get_records_size parameter in the
server configuration. Use hasMoreRecords
to see if more records exist in the result
to be fetched, and offset
& limit
to
request subsequent pages of results. The default value is
-9999.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of RESULT_TABLE
. If RESULT_TABLE_PERSIST
is FALSE
(or unspecified), then this is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_RESULT_TABLE_NAME
.
Supported values: TRUE, FALSE. The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema as part of RESULT_TABLE
and use createSchema
to
create the schema if non-existent] Name of a
schema which is to contain the table specified
in RESULT_TABLE
. If the schema provided is
non-existent, it will be automatically created.
EXPRESSION
: Filter expression to apply to the
table prior to computing the aggregate group by.
CHUNKED_EXPRESSION_EVALUATION
: evaluate the
filter expression during group-by chunk
processing.
Supported values: TRUE, FALSE. The default value is FALSE.
HAVING
: Filter expression to apply to the
aggregated results.
SORT_ORDER
: [DEPRECATED--use order_by instead]
String indicating how the returned values should
be sorted - ascending or descending.
Supported values:
ASCENDING
: Indicates that the returned
values should be sorted in ascending
order.
DESCENDING
: Indicates that the returned
values should be sorted in descending
order.
ASCENDING
.
SORT_BY
: [DEPRECATED--use order_by instead]
String determining how the results are sorted.
Supported values:
KEY
: Indicates that the returned values
should be sorted by key, which
corresponds to the grouping columns. If
you have multiple grouping columns (and
are sorting by key), it will first sort
the first grouping column, then the
second grouping column, etc.
VALUE
: Indicates that the returned
values should be sorted by value, which
corresponds to the aggregates. If you
have multiple aggregates (and are
sorting by value), it will first sort by
the first aggregate, then the second
aggregate, etc.
VALUE
.
ORDER_BY
: Comma-separated list of the columns
to be sorted by as well as the sort direction,
e.g., 'timestamp asc, x desc'. The default value
is ''.
STRATEGY_DEFINITION
: The tier strategy for the table
and its columns.
RESULT_TABLE
: The name of a table used to store
the results, in [schema_name.]table_name format,
using standard name resolution rules and
meeting table naming criteria. Column
names (group-by and aggregate fields) need to be
given aliases e.g. ["FChar256 as fchar256",
"sum(FDouble) as sfd"]. If present, no results
are returned in the response. This option is
not available if one of the grouping attributes
is an unrestricted string (i.e., not charN)
type.
RESULT_TABLE_PERSIST
: If TRUE
, then the result table specified in RESULT_TABLE
will be persisted and will not
expire unless a TTL
is specified. If FALSE
, then the result table will be an
in-memory table and will expire unless a TTL
is specified otherwise.
Supported values: TRUE, FALSE. The default value is FALSE.
RESULT_TABLE_FORCE_REPLICATED
: Force the result
table to be replicated (ignores any sharding).
Must be used in combination with the RESULT_TABLE
option.
Supported values: TRUE, FALSE. The default value is FALSE.
RESULT_TABLE_GENERATE_PK
: If TRUE
then set a primary key for the result
table. Must be used in combination with the
RESULT_TABLE
option.
Supported values: TRUE, FALSE. The default value is FALSE.
TTL
: Sets the TTL of the table specified in
RESULT_TABLE
.
CHUNK_SIZE
: Indicates the number of records per
chunk to be used for the result table. Must be
used in combination with the RESULT_TABLE
option.
CHUNK_COLUMN_MAX_MEMORY
: Indicates the target
maximum data size for each column in a chunk to
be used for the result table. Must be used in
combination with the RESULT_TABLE
option.
CHUNK_MAX_MEMORY
: Indicates the target maximum
data size for all columns in a chunk to be used
for the result table. Must be used in
combination with the RESULT_TABLE
option.
CREATE_INDEXES
: Comma-separated list of columns
on which to create indexes on the result table.
Must be used in combination with the RESULT_TABLE
option.
VIEW_ID
: ID of view of which the result table
will be a member. The default value is ''.
PIVOT
: pivot column
PIVOT_VALUES
: The value list provided will
become the column headers in the output. Should
be the values from the pivot_column.
GROUPING_SETS
: Customize the grouping attribute
sets to compute the aggregates. These sets can
include ROLLUP or CUBE operators. The attribute
sets should be enclosed in parentheses and can
include composite attributes. All attributes
specified in the grouping sets must be present in
the group-by attributes.
ROLLUP
: This option is used to specify the
multilevel aggregates.
CUBE
: This option is used to specify the
multidimensional aggregates.
SHARD_KEY
: Comma-separated list of the columns
to be sharded on; e.g. 'column1, column2'. The
columns specified must be present in columnNames
. If any alias is given for any
column name, the alias must be used, rather than
the original column name. The default value is
''.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateHistogramResponse aggregateHistogram(AggregateHistogramRequest request) throws GPUdbException
interval
is
used to produce bins of that size and the result, computed over the
records falling within each bin, is returned. For each bin, the start
value is inclusive, but the end value is exclusive--except for the very
last bin for which the end value is also inclusive. The value returned
for each bin is the number of records in it, except when a column name
is provided as a VALUE_COLUMN
. In this latter case the sum of the values corresponding
to the VALUE_COLUMN
is used as the result instead. The total number of bins
requested cannot exceed 10,000.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based) build to service a request that specifies a VALUE_COLUMN
.
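The binning rule above (start value inclusive, end value exclusive, except for the last bin, which is closed on both ends) can be sketched client-side. This helper is purely illustrative and not part of the GPUdb API:

```java
public class HistogramBins {
    // Returns the bin index for a value, mirroring the rule above:
    // [start + n*interval, start + (n+1)*interval) for every bin except
    // the last, whose upper bound is inclusive. Returns -1 if out of range.
    public static int binIndex(double start, double end, double interval, double value) {
        if (value < start || value > end) return -1;
        int nBins = (int) Math.ceil((end - start) / interval);
        if (value == end) return nBins - 1;   // last bin is closed on the right
        return (int) ((value - start) / interval);
    }

    public static void main(String[] args) {
        // Bins over [0, 10] with interval 2: [0,2), [2,4), [4,6), [6,8), [8,10]
        System.out.println(binIndex(0, 10, 2, 3.5)); // prints 1
        System.out.println(binIndex(0, 10, 2, 10));  // prints 4 (end value lands in the last bin)
    }
}
```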
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateHistogramResponse aggregateHistogram(String tableName, String columnName, double start, double end, double interval, Map<String,String> options) throws GPUdbException
interval
is used to produce bins of that
size and the result, computed over the records falling within each bin,
is returned. For each bin, the start value is inclusive, but the end
value is exclusive--except for the very last bin for which the end value
is also inclusive. The value returned for each bin is the number of
records in it, except when a column name is provided as a VALUE_COLUMN
. In this latter case the sum of the values corresponding
to the VALUE_COLUMN
is used as the result instead. The total number of bins
requested cannot exceed 10,000.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based) build to service a request that specifies a VALUE_COLUMN
.
tableName
- Name of the table on which the operation will be
performed. Must be an existing table, in
[schema_name.]table_name format, using standard name resolution rules.columnName
- Name of a column or an expression of one or more
column names over which the histogram will be
calculated.start
- Lower end value of the histogram interval, inclusive.end
- Upper end value of the histogram interval, inclusive.interval
- The size of each bin within the start and end
parameters.options
- Optional parameters.
VALUE_COLUMN
: The name of the column to use
when calculating the bin values (values are
summed). The column must be a numerical type
(int, double, long, float).
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateKMeansResponse aggregateKMeans(AggregateKMeansRequest request) throws GPUdbException
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateKMeansResponse aggregateKMeans(String tableName, List<String> columnNames, int k, double tolerance, Map<String,String> options) throws GPUdbException
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.
tableName
- Name of the table on which the operation will be
performed. Must be an existing table, in
[schema_name.]table_name format, using standard name resolution rules.columnNames
- List of column names on which the operation would be
performed. If n columns are provided then each of
the k result points will have n dimensions
corresponding to the n columns.k
- The number of mean points to be determined by the algorithm.tolerance
- Stop iterating when the distances between successive
points is less than the given tolerance.options
- Optional parameters.
WHITEN
: When set to 1, each of the columns is
first normalized by its standard deviation -
default is not to whiten.
MAX_ITERS
: Number of times to try to hit the
tolerance limit before giving up - default is
10.
NUM_TRIES
: Number of times to run the k-means
algorithm, each with a different randomly selected
set of starting points - helps avoid a local minimum.
Default is 1.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of RESULT_TABLE
. If RESULT_TABLE_PERSIST
is FALSE
(or unspecified), then this is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_RESULT_TABLE_NAME
.
Supported values:
The default value is FALSE
.
RESULT_TABLE
: The name of a table used to store
the results, in [schema_name.]table_name format,
using standard name resolution rules and
meeting table naming criteria. If
this option is specified, the results are not
returned in the response.
RESULT_TABLE_PERSIST
: If TRUE
, then the result table specified in RESULT_TABLE
will be persisted and will not
expire unless a TTL
is specified. If FALSE
, then the result table will be an
in-memory table and will expire unless a TTL
is specified otherwise.
Supported values:
The default value is FALSE
.
TTL
: Sets the TTL of the table specified in
RESULT_TABLE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateMinMaxResponse aggregateMinMax(AggregateMinMaxRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateMinMaxResponse aggregateMinMax(String tableName, String columnName, Map<String,String> options) throws GPUdbException
tableName
- Name of the table on which the operation will be
performed. Must be an existing table, in
[schema_name.]table_name format, using standard name resolution rules.columnName
- Name of a column or an expression of one or more
column names on which the min-max will be calculated.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateMinMaxGeometryResponse aggregateMinMaxGeometry(AggregateMinMaxGeometryRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateMinMaxGeometryResponse aggregateMinMaxGeometry(String tableName, String columnName, Map<String,String> options) throws GPUdbException
tableName
- Name of the table on which the operation will be
performed. Must be an existing table, in
[schema_name.]table_name format, using standard name resolution rules.columnName
- Name of a geospatial geometry column on which the
min-max will be calculated.options
- Optional parameters. The default value is an empty
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateStatisticsResponse aggregateStatistics(AggregateStatisticsRequest request) throws GPUdbException
The available statistics are: COUNT
(number
of total objects), MEAN
, STDV
(standard
deviation), VARIANCE
,
SKEW
,
KURTOSIS
, SUM
, MIN
, MAX
, WEIGHTED_AVERAGE
, CARDINALITY
(unique count), ESTIMATED_CARDINALITY
, PERCENTILE
, and PERCENTILE_RANK
.
Estimated cardinality is calculated by using the hyperloglog approximation technique.
Percentiles and percentile ranks are approximate and are calculated
using the t-digest algorithm. They must include the desired PERCENTILE
/PERCENTILE_RANK
. To compute multiple percentiles, each value must be
specified separately
(e.g., 'percentile(75.0),percentile(99.0),percentile_rank(1234.56),percentile_rank(-5)').
A second, comma-separated value can be added to the PERCENTILE
statistic to calculate percentile resolution, e.g., a 50th
percentile with 200 resolution would be 'percentile(50,200)'.
The weighted average statistic requires a weight column to be specified
in WEIGHT_COLUMN_NAME
. The weighted average is then defined as the sum of
the products of columnName
times the WEIGHT_COLUMN_NAME
values divided by the sum of the WEIGHT_COLUMN_NAME
values.
Additional columns can be used in the calculation of statistics via
ADDITIONAL_COLUMN_NAMES
. Values in these columns will be included in
the overall aggregate calculation--individual aggregates will not be
calculated per additional column. For instance, requesting the COUNT
&
MEAN
of
columnName
x and ADDITIONAL_COLUMN_NAMES
y & z, where x holds the numbers 1-10, y holds
11-20, and z holds 21-30, would return the total number of x, y, & z
values (30), and the single average value across all x, y, & z values
(15.5).
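The weighted-average definition and the pooled aggregation over additional columns described above can be sketched in plain Java; this is a toy illustration of the arithmetic, not the server's implementation:

```java
public class AggregateSketch {
    // Weighted average as defined above: sum(x_i * w_i) / sum(w_i).
    public static double weightedAverage(double[] x, double[] w) {
        double num = 0, den = 0;
        for (int i = 0; i < x.length; i++) { num += x[i] * w[i]; den += w[i]; }
        return num / den;
    }

    // MEAN over a primary column plus additional columns: all values are
    // pooled into one aggregate, not one aggregate per column.
    public static double pooledMean(double[]... columns) {
        double sum = 0; int count = 0;
        for (double[] col : columns) { for (double v : col) { sum += v; count++; } }
        return sum / count;
    }

    public static void main(String[] args) {
        // x holds 1-10, y holds 11-20, z holds 21-30, as in the example above.
        double[] x = new double[10], y = new double[10], z = new double[10];
        for (int i = 0; i < 10; i++) { x[i] = i + 1; y[i] = i + 11; z[i] = i + 21; }
        System.out.println(pooledMean(x, y, z)); // prints 15.5
        System.out.println(weightedAverage(new double[]{1, 2, 3}, new double[]{1, 1, 2})); // prints 2.25
    }
}
```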
The response includes a list of key/value pairs of each statistic requested and its corresponding value.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateStatisticsResponse aggregateStatistics(String tableName, String columnName, String stats, Map<String,String> options) throws GPUdbException
The available statistics are: COUNT
(number
of total objects), MEAN
, STDV
(standard
deviation), VARIANCE
,
SKEW
,
KURTOSIS
, SUM
, MIN
, MAX
, WEIGHTED_AVERAGE
, CARDINALITY
(unique count), ESTIMATED_CARDINALITY
, PERCENTILE
, and PERCENTILE_RANK
.
Estimated cardinality is calculated by using the hyperloglog approximation technique.
Percentiles and percentile ranks are approximate and are calculated
using the t-digest algorithm. They must include the desired PERCENTILE
/PERCENTILE_RANK
. To compute multiple percentiles, each value must be
specified separately
(e.g., 'percentile(75.0),percentile(99.0),percentile_rank(1234.56),percentile_rank(-5)').
A second, comma-separated value can be added to the PERCENTILE
statistic to calculate percentile resolution, e.g., a 50th
percentile with 200 resolution would be 'percentile(50,200)'.
The weighted average statistic requires a weight column to be specified
in WEIGHT_COLUMN_NAME
. The weighted average is then defined as the sum of
the products of columnName
times the WEIGHT_COLUMN_NAME
values divided by the sum of the WEIGHT_COLUMN_NAME
values.
Additional columns can be used in the calculation of statistics via
ADDITIONAL_COLUMN_NAMES
. Values in these columns will be included in
the overall aggregate calculation--individual aggregates will not be
calculated per additional column. For instance, requesting the COUNT
&
MEAN
of
columnName
x and ADDITIONAL_COLUMN_NAMES
y & z, where x holds the numbers 1-10, y holds
11-20, and z holds 21-30, would return the total number of x, y, & z
values (30), and the single average value across all x, y, & z values
(15.5).
The response includes a list of key/value pairs of each statistic requested and its corresponding value.
tableName
- Name of the table on which the statistics operation
will be performed, in [schema_name.]table_name format,
using standard name resolution rules.columnName
- Name of the primary column for which the statistics
are to be calculated.stats
- Comma-separated list of the statistics to calculate, e.g.
"sum,mean".
Supported values:
COUNT
: Number of objects (independent of the
given column(s)).
MEAN
: Arithmetic mean (average), equivalent to
sum/count.
STDV
: Sample standard deviation (denominator is
count-1).
VARIANCE
: Unbiased sample variance (denominator
is count-1).
SKEW
: Skewness (third standardized moment).
KURTOSIS
: Kurtosis (fourth standardized moment).
SUM
: Sum of all values in the column(s).
MIN
: Minimum value of the column(s).
MAX
: Maximum value of the column(s).
WEIGHTED_AVERAGE
: Weighted arithmetic mean (using
the option WEIGHT_COLUMN_NAME
as the weighting column).
CARDINALITY
: Number of unique values in the
column(s).
ESTIMATED_CARDINALITY
: Estimate (via hyperloglog
technique) of the number of unique values in the
column(s).
PERCENTILE
: Estimate (via t-digest) of the given
percentile of the column(s) (percentile(50.0) will
be an approximation of the median). Add a second,
comma-separated value to calculate percentile
resolution, e.g., 'percentile(75,150)'
PERCENTILE_RANK
: Estimate (via t-digest) of the
percentile rank of the given value in the
column(s) (if the given value is the median of the
column(s), percentile_rank(<median>) will
return approximately 50.0).
options
- Optional parameters.
ADDITIONAL_COLUMN_NAMES
: A list of comma-separated
column names over which statistics can
be accumulated along with the primary column.
All columns listed and columnName
must
be of the same type. Must not include the
column specified in columnName
and no
column can be listed twice.
WEIGHT_COLUMN_NAME
: Name of column used as
weighting attribute for the weighted average
statistic.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateStatisticsByRangeResponse aggregateStatisticsByRange(AggregateStatisticsByRangeRequest request) throws GPUdbException
WEIGHT_COLUMN_NAME
. The weighted average is then defined as the sum of
the products of the value column times the weight column divided by the
sum of the weight column.
There are two methods for binning the set members. In the first, which
can be used for numeric valued binning-columns, a min, max and interval
are specified. The number of bins, nbins, is the integer upper bound of
(max-min)/interval. Values that fall in the range
[min+n*interval,min+(n+1)*interval) are placed in the nth bin where n
ranges from 0..nbins-2. The final bin is [min+(nbins-1)*interval,max]. In
the second method, BIN_VALUES
specifies a list of binning column values. Binning-columns
whose value matches the nth member of the BIN_VALUES
list are placed in the nth bin. When a list is provided, the
binning-column must be of type string or int.
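The first (numeric) binning method above can be sketched directly from its definition; this is an illustrative client-side reconstruction, not the server code:

```java
public class RangeBins {
    // nbins is the integer upper bound of (max - min) / interval; bin n
    // covers [min + n*interval, min + (n+1)*interval), and the final bin's
    // upper bound is max, inclusive.
    public static int numBins(double min, double max, double interval) {
        return (int) Math.ceil((max - min) / interval);
    }

    public static int binOf(double min, double max, double interval, double value) {
        int nbins = numBins(min, max, interval);
        if (value == max) return nbins - 1; // final bin is closed at max
        return (int) ((value - min) / interval);
    }

    public static void main(String[] args) {
        System.out.println(numBins(0, 7, 2));    // prints 4 (ceil(7/2))
        System.out.println(binOf(0, 7, 2, 6.5)); // prints 3 (final bin [6, 7])
    }
}
```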
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateStatisticsByRangeResponse aggregateStatisticsByRange(String tableName, String selectExpression, String columnName, String valueColumnName, String stats, double start, double end, double interval, Map<String,String> options) throws GPUdbException
WEIGHT_COLUMN_NAME
. The weighted average is then defined as the sum of
the products of the value column times the weight column divided by the
sum of the weight column.
There are two methods for binning the set members. In the first, which
can be used for numeric valued binning-columns, a min, max and interval
are specified. The number of bins, nbins, is the integer upper bound of
(max-min)/interval. Values that fall in the range
[min+n*interval,min+(n+1)*interval) are placed in the nth bin where n
ranges from 0..nbins-2. The final bin is [min+(nbins-1)*interval,max]. In
the second method, BIN_VALUES
specifies a list of binning column values. Binning-columns
whose value matches the nth member of the BIN_VALUES
list are placed in the nth bin. When a list is provided, the
binning-column must be of type string or int.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.
tableName
- Name of the table on which the ranged-statistics
operation will be performed, in
[schema_name.]table_name format, using standard name resolution rules.selectExpression
- For a non-empty expression statistics are
calculated for those records for which the
expression is true. The default value is ''.columnName
- Name of the binning-column used to divide the set
samples into bins.valueColumnName
- Name of the value-column for which statistics
are to be computed.stats
- A comma-separated list of the statistics to
calculate, e.g. 'sum,mean'. Available statistics: mean,
stdv (standard deviation), variance, skew, kurtosis, sum.start
- The lower bound of the binning-column.end
- The upper bound of the binning-column.interval
- The interval of a bin. Set members fall into bin i if
the binning-column falls in the range
[start+interval*i, start+interval*(i+1)).options
- Map of optional parameters:
ADDITIONAL_COLUMN_NAMES
: A list of comma-separated
value-column names over which
statistics can be accumulated along with the
primary value_column.
BIN_VALUES
: A list of comma-separated
binning-column values. Values matching the nth
entry of bin_values are placed in the nth bin.
WEIGHT_COLUMN_NAME
: Name of the column used as
weighting column for the weighted_average
statistic.
ORDER_COLUMN_NAME
: Name of the column used for
candlestick charting techniques.
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public RawAggregateUniqueResponse aggregateUniqueRaw(AggregateUniqueRequest request) throws GPUdbException
columnName
) of a particular table or view (specified by tableName
). If
columnName
is a numeric column, the values will be in data
. Otherwise if
columnName
is a string column, the values will be in jsonEncodedResponse
. The results can be paged via offset
and limit
parameters.
Columns marked as store-only cannot be used with this function.
To get the first 10 unique values sorted in descending order options
would be:
{"limit":"10","sort_order":"descending"}The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a RESULT_TABLE
name is specified in the options
, the
results are stored in a new table with that name--no results are
returned in the response. Both the table name and resulting column name
must adhere to standard naming conventions; any column expression
will need to be aliased. If the source table's shard
key is used as the columnName
,
the result table will be sharded, in all other cases it will be
replicated. Sorting will properly function only if the result table is
replicated or if there is only one processing node and should not be
relied upon in other cases. Not available if the value of columnName
is
an unrestricted-length string.
request
- Request
object containing
the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateUniqueResponse aggregateUnique(AggregateUniqueRequest request) throws GPUdbException
columnName
) of a particular table or view (specified by tableName
). If
columnName
is a numeric column, the values will be in data
. Otherwise if
columnName
is a string column, the values will be in jsonEncodedResponse
. The results can be paged via offset
and limit
parameters.
Columns marked as store-only cannot be used with this function.
To get the first 10 unique values sorted in descending order options
would be:
{"limit":"10","sort_order":"descending"}The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a RESULT_TABLE
name is specified in the options
, the
results are stored in a new table with that name--no results are
returned in the response. Both the table name and resulting column name
must adhere to standard naming conventions; any column expression
will need to be aliased. If the source table's shard
key is used as the columnName
,
the result table will be sharded, in all other cases it will be
replicated. Sorting will properly function only if the result table is
replicated or if there is only one processing node and should not be
relied upon in other cases. Not available if the value of columnName
is
an unrestricted-length string.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateUniqueResponse aggregateUnique(String tableName, String columnName, long offset, long limit, Map<String,String> options) throws GPUdbException
columnName
) of a particular table or view (specified by tableName
). If columnName
is a numeric column, the values will
be in data
.
Otherwise if columnName
is a string column, the values will be
in jsonEncodedResponse
. The results can be paged via offset
and limit
parameters.
Columns marked as store-only cannot be used with this function.
To get the first 10 unique values sorted in descending order options
would be:
{"limit":"10","sort_order":"descending"}The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a RESULT_TABLE
name is specified in the options
, the results are
stored in a new table with that name--no results are returned in the
response. Both the table name and resulting column name must adhere to
standard
naming conventions; any column expression will need to be aliased.
If the source table's shard
key is used as the columnName
, the result table will be
sharded, in all other cases it will be replicated. Sorting will
properly function only if the result table is replicated or if there is
only one processing node and should not be relied upon in other cases.
Not available if the value of columnName
is an
unrestricted-length string.
tableName
- Name of an existing table or view on which the
operation will be performed, in
[schema_name.]table_name format, using standard name resolution rules.columnName
- Name of the column or an expression containing one or
more column names on which the unique function would
be applied.offset
- A positive integer indicating the number of initial
results to skip (this can be useful for paging through
the results). The default value is 0. The minimum allowed
value is 0. The maximum allowed value is MAX_INT.limit
- A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to indicate
that the maximum number of results allowed by the server
should be returned. The number of records returned will
never exceed the server's own limit, defined by the max_get_records_size parameter in the
server configuration. Use hasMoreRecords
to see if more records exist in the result
to be fetched, and offset
& limit
to
request subsequent pages of results. The default value is
-9999.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of RESULT_TABLE
. If RESULT_TABLE_PERSIST
is FALSE
(or unspecified), then this is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_RESULT_TABLE_NAME
.
Supported values:
The default value is FALSE
.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema as part of RESULT_TABLE
and use createSchema
to
create the schema if non-existent] Name of a
schema which is to contain the table specified
in RESULT_TABLE
. If the schema provided is
non-existent, it will be automatically created.
EXPRESSION
: Optional filter expression to apply
to the table.
SORT_ORDER
: String indicating how the returned
values should be sorted.
Supported values:
The default value is ASCENDING
.
ORDER_BY
: Comma-separated list of the columns
to be sorted by as well as the sort direction,
e.g., 'timestamp asc, x desc'. The default value
is ''.
RESULT_TABLE
: The name of the table used to
store the results, in [schema_name.]table_name
format, using standard name resolution rules and
meeting table naming criteria. If
present, no results are returned in the
response. Not available if columnName
is an unrestricted-length string.
RESULT_TABLE_PERSIST
: If TRUE
, then the result table specified in RESULT_TABLE
will be persisted and will not
expire unless a TTL
is specified. If FALSE
, then the result table will be an
in-memory table and will expire unless a TTL
is specified otherwise.
Supported values:
The default value is FALSE
.
RESULT_TABLE_FORCE_REPLICATED
: Force the result
table to be replicated (ignores any sharding).
Must be used in combination with the RESULT_TABLE
option.
Supported values:
The default value is FALSE
.
RESULT_TABLE_GENERATE_PK
: If TRUE
then set a primary key for the result
table. Must be used in combination with the
RESULT_TABLE
option.
Supported values:
The default value is FALSE
.
TTL
: Sets the TTL of the table specified in
RESULT_TABLE
.
CHUNK_SIZE
: Indicates the number of records per
chunk to be used for the result table. Must be
used in combination with the RESULT_TABLE
option.
CHUNK_COLUMN_MAX_MEMORY
: Indicates the target
maximum data size for each column in a chunk to
be used for the result table. Must be used in
combination with the RESULT_TABLE
option.
CHUNK_MAX_MEMORY
: Indicates the target maximum
data size for all columns in a chunk to be used
for the result table. Must be used in
combination with the RESULT_TABLE
option.
VIEW_ID
: ID of view of which the result table
will be a member. The default value is ''.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public RawAggregateUnpivotResponse aggregateUnpivotRaw(AggregateUnpivotRequest request) throws GPUdbException
For unpivot details and examples, see Unpivot. For limitations, see Unpivot Limitations.
Unpivot is used to normalize tables that are built for cross tabular reporting purposes. The unpivot operator rotates the column values for all the pivoted columns. A variable column, value column and all columns from the source table except the unpivot columns are projected into the result table. The variable column and value columns in the result table indicate the pivoted column name and values respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
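The rotation the unpivot operator performs can be sketched on a small in-memory table; this is a toy illustration of the variable/value projection, not the server's implementation, and the column names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class UnpivotSketch {
    // Rotates pivoted columns into (variable, value) rows: each source row
    // yields one output row per pivoted column, carrying the key column along.
    public static List<String[]> unpivot(String[] keys, String[] pivotedColumns,
                                         double[][] values) {
        List<String[]> rows = new ArrayList<>();
        for (int r = 0; r < keys.length; r++)
            for (int c = 0; c < pivotedColumns.length; c++)
                rows.add(new String[] { keys[r], pivotedColumns[c],
                                        String.valueOf(values[r][c]) });
        return rows;
    }

    public static void main(String[] args) {
        // One source row (id=a) with two pivoted columns becomes two rows.
        List<String[]> out = unpivot(new String[]{"a"},
                                     new String[]{"q1_sales", "q2_sales"},
                                     new double[][]{{10.0, 20.0}});
        for (String[] row : out)
            System.out.println(String.join(",", row));
        // a,q1_sales,10.0
        // a,q2_sales,20.0
    }
}
```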
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateUnpivotResponse aggregateUnpivot(AggregateUnpivotRequest request) throws GPUdbException
For unpivot details and examples, see Unpivot. For limitations, see Unpivot Limitations.
Unpivot is used to normalize tables that are built for cross tabular reporting purposes. The unpivot operator rotates the column values for all the pivoted columns. A variable column, value column and all columns from the source table except the unpivot columns are projected into the result table. The variable column and value columns in the result table indicate the pivoted column name and values respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AggregateUnpivotResponse aggregateUnpivot(String tableName, List<String> columnNames, String variableColumnName, String valueColumnName, List<String> pivotedColumns, Map<String,String> options) throws GPUdbException
For unpivot details and examples, see Unpivot. For limitations, see Unpivot Limitations.
Unpivot is used to normalize tables that are built for cross tabular reporting purposes. The unpivot operator rotates the column values for all the pivoted columns. A variable column, value column and all columns from the source table except the unpivot columns are projected into the result table. The variable column and value columns in the result table indicate the pivoted column name and values respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
tableName
- Name of the table on which the operation will be
performed. Must be an existing table/view, in
[schema_name.]table_name format, using standard name resolution rules.columnNames
- List of column names or expressions. A wildcard '*'
can be used to include all the non-pivoted columns
from the source table.variableColumnName
- Specifies the variable/parameter column name.
The default value is ''.valueColumnName
- Specifies the value column name. The default
value is ''.pivotedColumns
- List of one or more values, typically the column
names of the input table. All the columns in the
source table must have the same data type.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of RESULT_TABLE
. If RESULT_TABLE_PERSIST
is FALSE
(or unspecified), then this is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_RESULT_TABLE_NAME
.
Supported values:
The default value is FALSE
.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema as part of RESULT_TABLE
and use createSchema
to
create the schema if non-existent] Name of a
schema which is to contain the table specified
in RESULT_TABLE
. If the schema is non-existent, it
will be automatically created.
RESULT_TABLE
: The name of a table used to store
the results, in [schema_name.]table_name format,
using standard name resolution rules and
meeting table naming criteria. If
present, no results are returned in the
response.
RESULT_TABLE_PERSIST
: If TRUE
, then the result table specified in RESULT_TABLE
will be persisted and will not
expire unless a TTL
is specified. If FALSE
, then the result table will be an
in-memory table and will expire unless a TTL
is specified otherwise.
Supported values:
The default value is FALSE
.
EXPRESSION
: Filter expression to apply to the
table prior to unpivot processing.
ORDER_BY
: Comma-separated list of the columns
to be sorted by; e.g. 'timestamp asc, x desc'.
The columns specified must be present in the input
table. If any alias is given for any column
name, the alias must be used, rather than the
original column name. The default value is ''.
CHUNK_SIZE
: Indicates the number of records per
chunk to be used for the result table. Must be
used in combination with the RESULT_TABLE
option.
CHUNK_COLUMN_MAX_MEMORY
: Indicates the target
maximum data size for each column in a chunk to
be used for the result table. Must be used in
combination with the RESULT_TABLE
option.
CHUNK_MAX_MEMORY
: Indicates the target maximum
data size for all columns in a chunk to be used
for the result table. Must be used in
combination with the RESULT_TABLE
option.
LIMIT
: The number of records to keep. The
default value is ''.
TTL
: Sets the TTL of the table specified in
RESULT_TABLE
.
VIEW_ID
: ID of the view this result table is part of. The
default value is ''.
CREATE_INDEXES
: Comma-separated list of columns
on which to create indexes on the table
specified in RESULT_TABLE
. The columns specified must be
present in output column names. If any alias is
given for any column name, the alias must be
used, rather than the original column name.
RESULT_TABLE_FORCE_REPLICATED
: Force the result
table to be replicated (ignores any sharding).
Must be used in combination with the RESULT_TABLE
option.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterCredentialResponse alterCredential(AlterCredentialRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterCredentialResponse alterCredential(String credentialName, Map<String,String> credentialUpdatesMap, Map<String,String> options) throws GPUdbException
credentialName
- Name of the credential to be altered. Must be an
existing credential.credentialUpdatesMap
- Map containing the properties of the
credential to be updated. Error if empty.
TYPE
: New type for the credential.
Supported values:
IDENTITY
: New user for the
credential
SECRET
: New password for the
credential
SCHEMA_NAME
: Updates the schema
name. If SCHEMA_NAME
doesn't exist, an
error will be thrown. If SCHEMA_NAME
is empty, then the
user's default schema will be used.
options
- Optional parameters.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterDatasinkResponse alterDatasink(AlterDatasinkRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterDatasinkResponse alterDatasink(String name, Map<String,String> datasinkUpdatesMap, Map<String,String> options) throws GPUdbException
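A minimal sketch of this overload, repointing an S3 sink. The bucket, region, and credential names are placeholders, and the map keys are assumed to be the lowercase string forms of the documented constants:

```java
import java.util.HashMap;
import java.util.Map;

public class AlterDatasinkExample {
    // Builds the datasinkUpdatesMap documented below for an S3 sink.
    static Map<String, String> buildUpdates() {
        Map<String, String> updates = new HashMap<>();
        updates.put("destination", "s3://my-bucket/exports"); // 'destination_type://path' format
        updates.put("s3_region", "us-east-1");
        updates.put("credential", "my_s3_credential");
        updates.put("connection_timeout", "30");              // seconds
        return updates;
    }

    public static void main(String[] args) {
        // With a connected GPUdb instance `db` (not shown):
        // db.alterDatasink("my_sink", buildUpdates(), new HashMap<String, String>());
        System.out.println(buildUpdates());
    }
}
```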
name
- Name of the data sink to be altered. Must be an existing
data sink.datasinkUpdatesMap
- Map containing the properties of the data
sink to be updated. Error if empty.
DESTINATION
: Destination for the
output data in format
'destination_type://path[:port]'.
Supported destination types are
'azure', 'gcs', 'hdfs', 'http',
'https', 'jdbc', 'kafka', and 's3'.
CONNECTION_TIMEOUT
: Timeout in
seconds for connecting to this sink
WAIT_TIMEOUT
: Timeout in seconds for
waiting for a response from this sink
CREDENTIAL
: Name of the credential object
to be used in this data sink
S3_BUCKET_NAME
: Name of the Amazon
S3 bucket to use as the data sink
S3_REGION
: Name of the Amazon S3
region where the given bucket is
located
S3_VERIFY_SSL
: Whether to verify SSL
connections.
Supported values:
TRUE
: Connect with SSL
verification
FALSE
: Connect without
verifying the SSL connection;
for testing purposes,
bypassing TLS errors,
self-signed certificates,
etc.
The default value is TRUE.
S3_USE_VIRTUAL_ADDRESSING
: Whether
to use virtual addressing when
referencing the Amazon S3 sink.
Supported values:
TRUE
: The request URI
should be specified in
virtual-hosted-style format
where the bucket name is part
of the domain name in the
URL.
FALSE
: Use path-style URI
for requests.
The default value is TRUE.
S3_AWS_ROLE_ARN
: Amazon IAM Role ARN
which has required S3 permissions
that can be assumed for the given S3
IAM user
S3_ENCRYPTION_CUSTOMER_ALGORITHM
:
Customer encryption algorithm used
for encrypting data
S3_ENCRYPTION_CUSTOMER_KEY
: Customer
encryption key to encrypt or decrypt
data
S3_ENCRYPTION_TYPE
: Server side
encryption type
S3_KMS_KEY_ID
: KMS key
HDFS_KERBEROS_KEYTAB
: Kerberos
keytab file location for the given
HDFS user. This may be a KIFS file.
HDFS_DELEGATION_TOKEN
: Delegation
token for the given HDFS user
HDFS_USE_KERBEROS
: Use Kerberos
authentication for the given HDFS
cluster.
Supported values: TRUE and FALSE. The default value is FALSE.
AZURE_STORAGE_ACCOUNT_NAME
: Name of
the Azure storage account to use as
the data sink; valid only if
tenant_id is specified
AZURE_CONTAINER_NAME
: Name of the
Azure storage container to use as the
data sink
AZURE_TENANT_ID
: Active Directory
tenant ID (or directory ID)
AZURE_SAS_TOKEN
: Shared access
signature token for Azure storage
account to use as the data sink
AZURE_OAUTH_TOKEN
: OAuth token to
access given storage container
GCS_BUCKET_NAME
: Name of the Google
Cloud Storage bucket to use as the
data sink
GCS_PROJECT_ID
: Name of the Google
Cloud project to use as the data sink
GCS_SERVICE_ACCOUNT_KEYS
: Google
Cloud service account keys to use for
authenticating the data sink
JDBC_DRIVER_JAR_PATH
: JDBC driver
jar file location. This may be a
KIFS file.
JDBC_DRIVER_CLASS_NAME
: Name of the
JDBC driver class
KAFKA_URL
: The publicly-accessible
full path URL to the Kafka broker,
e.g., 'http://172.123.45.67:9300'.
KAFKA_TOPIC_NAME
: Name of the Kafka
topic to use for this data sink, if
it references a Kafka broker
ANONYMOUS
: Create an anonymous
connection to the storage
provider--DEPRECATED: this is now the
default. Specify
use_managed_credentials for
non-anonymous connection.
Supported values: TRUE and FALSE. The default value is TRUE.
USE_MANAGED_CREDENTIALS
: When no
credentials are supplied, we use
anonymous access by default. If this
is set, we will use cloud provider
user settings.
Supported values: TRUE and FALSE. The default value is FALSE.
USE_HTTPS
: Use https to connect to
datasink if true, otherwise use http.
Supported values: TRUE and FALSE. The default value is TRUE.
MAX_BATCH_SIZE
: Maximum number of
records per notification message. The
default value is '1'.
MAX_MESSAGE_SIZE
: Maximum size in
bytes of each notification message.
The default value is '1000000'.
JSON_FORMAT
: The desired format of
JSON encoded notifications message.
Supported values:
The default value is FLAT
.
SKIP_VALIDATION
: Bypass validation
of connection to this data sink.
Supported values: TRUE and FALSE. The default value is FALSE.
SCHEMA_NAME
: Updates the schema
name. If SCHEMA_NAME
doesn't exist, an error
will be thrown. If SCHEMA_NAME
is empty, then the
user's default schema will be used.
options
- Optional parameters.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterDatasourceResponse alterDatasource(AlterDatasourceRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterDatasourceResponse alterDatasource(String name, Map<String,String> datasourceUpdatesMap, Map<String,String> options) throws GPUdbException
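A usage sketch for this overload, repointing a Kafka data source. The broker address, topic, and credential names are placeholders, and the map keys are assumed to be the lowercase string forms of the documented constants:

```java
import java.util.HashMap;
import java.util.Map;

public class AlterDatasourceExample {
    // Builds the datasourceUpdatesMap documented below for a Kafka source.
    static Map<String, String> buildUpdates() {
        Map<String, String> updates = new HashMap<>();
        updates.put("location", "kafka://broker.example.com:9092"); // provider://path[:port]
        updates.put("kafka_topic_name", "events");
        updates.put("credential", "my_kafka_credential");
        updates.put("wait_timeout", "60");                          // seconds
        return updates;
    }

    public static void main(String[] args) {
        // With a connected GPUdb instance `db` (not shown):
        // db.alterDatasource("my_source", buildUpdates(), new HashMap<String, String>());
        System.out.println(buildUpdates());
    }
}
```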
name
- Name of the data source to be altered. Must be an existing
data source.datasourceUpdatesMap
- Map containing the properties of the data
source to be updated. Error if empty.
LOCATION
: Location of the remote
storage in
'storage_provider_type://[storage_path[:storage_port]]'
format. Supported storage provider
types are 'azure', 'gcs', 'hdfs',
'jdbc', 'kafka', 'confluent', and
's3'.
USER_NAME
: Name of the remote
system user; may be an empty string
PASSWORD
: Password for the remote
system user; may be an empty string
SKIP_VALIDATION
: Bypass validation
of connection to remote source.
Supported values: TRUE and FALSE. The default value is FALSE.
CONNECTION_TIMEOUT
: Timeout in
seconds for connecting to this
storage provider
WAIT_TIMEOUT
: Timeout in seconds
for reading from this storage
provider
CREDENTIAL
: Name of the credential object
to be used in data source
S3_BUCKET_NAME
: Name of the Amazon
S3 bucket to use as the data source
S3_REGION
: Name of the Amazon S3
region where the given bucket is
located
S3_VERIFY_SSL
: Whether to verify
SSL connections.
Supported values:
TRUE
: Connect with SSL
verification
FALSE
: Connect without
verifying the SSL
connection; for testing
purposes, bypassing TLS
errors, self-signed
certificates, etc.
The default value is TRUE.
S3_USE_VIRTUAL_ADDRESSING
: Whether
to use virtual addressing when
referencing the Amazon S3 source.
Supported values:
TRUE
: The request URI
should be specified in
virtual-hosted-style format
where the bucket name is
part of the domain name in
the URL.
FALSE
: Use path-style URI
for requests.
The default value is TRUE.
S3_AWS_ROLE_ARN
: Amazon IAM Role
ARN which has required S3
permissions that can be assumed for
the given S3 IAM user
S3_ENCRYPTION_CUSTOMER_ALGORITHM
:
Customer encryption algorithm used
for encrypting data
S3_ENCRYPTION_CUSTOMER_KEY
:
Customer encryption key to encrypt
or decrypt data
HDFS_KERBEROS_KEYTAB
: Kerberos
keytab file location for the given
HDFS user. This may be a KIFS
file.
HDFS_DELEGATION_TOKEN
: Delegation
token for the given HDFS user
HDFS_USE_KERBEROS
: Use Kerberos
authentication for the given HDFS
cluster.
Supported values: TRUE and FALSE. The default value is FALSE.
AZURE_STORAGE_ACCOUNT_NAME
: Name
of the Azure storage account to use
as the data source; valid
only if tenant_id is specified
AZURE_CONTAINER_NAME
: Name of the
Azure storage container to use as
the data source
AZURE_TENANT_ID
: Active Directory
tenant ID (or directory ID)
AZURE_SAS_TOKEN
: Shared access
signature token for Azure storage
account to use as the data source
AZURE_OAUTH_TOKEN
: OAuth token to
access given storage container
GCS_BUCKET_NAME
: Name of the
Google Cloud Storage bucket to use
as the data source
GCS_PROJECT_ID
: Name of the Google
Cloud project to use as the data
source
GCS_SERVICE_ACCOUNT_KEYS
: Google
Cloud service account keys to use
for authenticating the data source
JDBC_DRIVER_JAR_PATH
: JDBC driver
jar file location. This may be a
KIFS file.
JDBC_DRIVER_CLASS_NAME
: Name of
the JDBC driver class
KAFKA_URL
: The publicly-accessible
full path URL to the Kafka broker,
e.g., 'http://172.123.45.67:9300'.
KAFKA_TOPIC_NAME
: Name of the
Kafka topic to use as the data
source
ANONYMOUS
: Create an anonymous
connection to the storage
provider--DEPRECATED: this is now
the default. Specify
use_managed_credentials for
non-anonymous connection.
Supported values: TRUE and FALSE. The default value is TRUE.
USE_MANAGED_CREDENTIALS
: When no
credentials are supplied, we use
anonymous access by default. If
this is set, we will use cloud
provider user settings.
Supported values: TRUE and FALSE. The default value is FALSE.
USE_HTTPS
: Use https to connect to
datasource if true, otherwise use
http.
Supported values: TRUE and FALSE. The default value is TRUE.
SCHEMA_NAME
: Updates the schema
name. If SCHEMA_NAME
doesn't exist, an
error will be thrown. If SCHEMA_NAME
is empty, then the
user's default schema will be used.
SCHEMA_REGISTRY_LOCATION
: Location
of Confluent Schema Registry in
'[storage_path[:storage_port]]'
format.
SCHEMA_REGISTRY_CREDENTIAL
:
Confluent Schema Registry credential object
name.
SCHEMA_REGISTRY_PORT
: Confluent
Schema Registry port (optional).
options
- Optional parameters.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterDirectoryResponse alterDirectory(AlterDirectoryRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterDirectoryResponse alterDirectory(String directoryName, Map<String,String> directoryUpdatesMap, Map<String,String> options) throws GPUdbException
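A short sketch of this overload, capping a KiFS directory's capacity. The directory name is a placeholder, and "data_limit" is assumed to be the string value of the DATA_LIMIT constant:

```java
import java.util.HashMap;
import java.util.Map;

public class AlterDirectoryExample {
    // Builds the directoryUpdatesMap documented below.
    static Map<String, String> buildUpdates() {
        Map<String, String> updates = new HashMap<>();
        updates.put("data_limit", "10737418240"); // 10 GiB cap; "-1" means no upper limit
        return updates;
    }

    public static void main(String[] args) {
        // With a connected GPUdb instance `db` (not shown):
        // db.alterDirectory("my_directory", buildUpdates(), new HashMap<String, String>());
        System.out.println(buildUpdates());
    }
}
```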
directoryName
- Name of the directory in KiFS to be altered.directoryUpdatesMap
- Map containing the properties of the
directory to be altered. Error if empty.
DATA_LIMIT
: The maximum capacity,
in bytes, to apply to the directory.
Set to -1 to indicate no upper
limit.
options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterEnvironmentResponse alterEnvironment(AlterEnvironmentRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterEnvironmentResponse alterEnvironment(String environmentName, String action, String value, Map<String,String> options) throws GPUdbException
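A usage sketch for this overload, installing one Python package into an environment. The environment and package names are placeholders, and "install_package" is assumed to be the string value of the INSTALL_PACKAGE constant:

```java
public class AlterEnvironmentExample {
    // Argument tuple for an install_package call:
    // { environmentName, action, value }.
    static String[] installArgs() {
        return new String[] { "my_env", "install_package", "scikit-learn" };
    }

    public static void main(String[] args) {
        String[] a = installArgs();
        // With a connected GPUdb instance `db` (not shown):
        // db.alterEnvironment(a[0], a[1], a[2], new java.util.HashMap<String, String>());
        System.out.println(String.join(" ", a));
    }
}
```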
environmentName
- Name of the environment to be altered.action
- Modification operation to be applied.
Supported values:
INSTALL_PACKAGE
: Install a python package from
PyPI, an external data source or KiFS
INSTALL_REQUIREMENTS
: Install packages from a
requirements file
UNINSTALL_PACKAGE
: Uninstall a python package.
UNINSTALL_REQUIREMENTS
: Uninstall packages from
a requirements file
RESET
: Uninstalls all packages in the
environment and resets it to the original state
at time of creation
REBUILD
: Recreates the environment and
re-installs all packages, upgrades the packages
if necessary based on dependencies
value
- The value of the modification, depending on action
. For example, if action
is INSTALL_PACKAGE
, this would be the python package name.
If action
is INSTALL_REQUIREMENTS
, this would be the path of a
requirements file from which to install packages. If an
external data source is specified in DATASOURCE_NAME
, this can be the path to a wheel file or
source archive. Alternatively, if installing from a file
(wheel or source archive), the value may be a reference to
a file in KiFS.options
- Optional parameters.
DATASOURCE_NAME
: Name of an existing external
data source from which packages specified in
value
can be loaded
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterGraphResponse alterGraph(AlterGraphRequest request) throws GPUdbException
GPUdbException
public AlterGraphResponse alterGraph(String graphName, String action, String actionArg, Map<String,String> options) throws GPUdbException
GPUdbException
public AlterModelResponse alterModel(AlterModelRequest request) throws GPUdbException
GPUdbException
public AlterModelResponse alterModel(String modelName, String action, String value, Map<String,String> options) throws GPUdbException
GPUdbException
public AlterResourceGroupResponse alterResourceGroup(AlterResourceGroupRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AlterResourceGroupResponse alterResourceGroup(String name, Map<String,Map<String,String>> tierAttributes, String ranking, String adjoiningResourceGroup, Map<String,String> options) throws GPUdbException
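The tierAttributes example from the parameter description below (1 GB max VRAM and 10 GB max RAM) can be assembled like this; the group name is a placeholder, and empty ranking/adjoining strings leave the ordering unchanged:

```java
import java.util.HashMap;
import java.util.Map;

public class AlterResourceGroupExample {
    // Builds {'VRAM':{'max_memory':'1000000000'}, 'RAM':{'max_memory':'10000000000'}}.
    static Map<String, Map<String, String>> buildTierAttributes() {
        Map<String, String> vram = new HashMap<>();
        vram.put("max_memory", "1000000000");    // 1 GB VRAM cap
        Map<String, String> ram = new HashMap<>();
        ram.put("max_memory", "10000000000");    // 10 GB RAM cap
        Map<String, Map<String, String>> tiers = new HashMap<>();
        tiers.put("VRAM", vram);
        tiers.put("RAM", ram);
        return tiers;
    }

    public static void main(String[] args) {
        // With a connected GPUdb instance `db` (not shown):
        // db.alterResourceGroup("analysts", buildTierAttributes(), "", "",
        //         new HashMap<String, String>());
        System.out.println(buildTierAttributes());
    }
}
```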
name
- Name of the group to be altered. Must be an existing
resource group name or an empty string when used
in conjunction with the is_default_group option.tierAttributes
- Optional map containing tier names and their
respective attribute group limits. The only
valid attribute limit that can be set is
max_memory (in bytes) for the VRAM & RAM tiers.
For instance, to set max VRAM capacity to 1GB and
max RAM capacity to 10GB, use:
{'VRAM':{'max_memory':'1000000000'},
'RAM':{'max_memory':'10000000000'}}.
MAX_MEMORY
: Maximum amount of memory
usable in the given tier at one time for
this group.
Map
.ranking
- If the resource group ranking is to be updated, this
indicates the relative ranking among existing resource
groups where this resource group will be moved; leave
blank if not changing the ranking. When using BEFORE
or AFTER
, specify which resource group this one will be
inserted before or after in adjoiningResourceGroup
.
Supported values: EMPTY_STRING, FIRST, LAST, BEFORE, and AFTER.
The default value is EMPTY_STRING
.adjoiningResourceGroup
- If ranking
is BEFORE
or AFTER
, this field indicates the resource
group before or after which the current
group will be placed; otherwise, leave
blank. The default value is ''.options
- Optional parameters.
MAX_CPU_CONCURRENCY
: Maximum number of
simultaneous threads that will be used to
execute a request for this group. The minimum
allowed value is '4'.
MAX_DATA
: Maximum amount of cumulative ram
usage regardless of tier status for this group.
The minimum allowed value is '-1'.
MAX_SCHEDULING_PRIORITY
: Maximum priority of a
scheduled task for this group. The minimum
allowed value is '1'. The maximum allowed value
is '100'.
MAX_TIER_PRIORITY
: Maximum priority of a tiered
object for this group. The minimum allowed value
is '1'. The maximum allowed value is '10'.
IS_DEFAULT_GROUP
: If TRUE
, this request applies to the global
default resource group. It is an error for this
field to be TRUE
when the name
field is also
populated.
Supported values: TRUE and FALSE. The default value is FALSE.
PERSIST
: If TRUE
and a system-level change was requested,
the system configuration will be written to disk
upon successful application of this request.
This will commit the changes from this request
and any additional in-memory modifications.
Supported values: TRUE and FALSE. The default value is TRUE.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AlterRoleResponse alterRole(AlterRoleRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public AlterRoleResponse alterRole(String name, String action, String value, Map<String,String> options) throws GPUdbException
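A short sketch of this overload, assigning a role to a resource group. The role and group names are placeholders, and "set_resource_group" is assumed to be the string value of the SET_RESOURCE_GROUP constant:

```java
public class AlterRoleExample {
    // Argument tuple for a set_resource_group call: { name, action, value }.
    static String[] setGroupArgs() {
        return new String[] { "analyst_role", "set_resource_group", "analysts" };
    }

    public static void main(String[] args) {
        String[] a = setGroupArgs();
        // With a connected GPUdb instance `db` (not shown):
        // db.alterRole(a[0], a[1], a[2], new java.util.HashMap<String, String>());
        System.out.println(String.join(" ", a));
    }
}
```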
name
- Name of the role to be altered. Must be an existing role.action
- Modification operation to be applied to the role.
Supported values:
SET_COMMENT
: Sets the comment for an internal
role.
SET_RESOURCE_GROUP
: Sets the resource group for
an internal role. The resource group must exist;
an empty string assigns the role to
the default resource group.
value
- The value of the modification, depending on action
.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public AlterSchemaResponse alterSchema(AlterSchemaRequest request) throws GPUdbException
schemaName
.request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterSchemaResponse alterSchema(String schemaName, String action, String value, Map<String,String> options) throws GPUdbException
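A short sketch of this overload, renaming a schema. Both schema names are placeholders, and "rename_schema" is assumed to be the string value of the RENAME_SCHEMA constant:

```java
public class AlterSchemaExample {
    // Argument tuple for a rename_schema call: { schemaName, action, value }.
    static String[] renameArgs() {
        return new String[] { "old_schema", "rename_schema", "new_schema" };
    }

    public static void main(String[] args) {
        String[] a = renameArgs();
        // With a connected GPUdb instance `db` (not shown):
        // db.alterSchema(a[0], a[1], a[2], new java.util.HashMap<String, String>());
        System.out.println(String.join(" ", a));
    }
}
```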
schemaName
.schemaName
- Name of the schema to be altered.action
- Modification operation to be applied.
Supported values:
ADD_COMMENT
: Adds a comment describing the
schema
RENAME_SCHEMA
: Renames a schema to value
. Has the same naming restrictions as tables.
value
- The value of the modification, depending on action
. For ADD_COMMENT
, this is the comment text; for
RENAME_SCHEMA
, this is the new name of
the schema.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterSystemPropertiesResponse alterSystemProperties(AlterSystemPropertiesRequest request) throws GPUdbException
alterSystemProperties
endpoint is primarily used to simplify the
testing of the system and is not expected to be used during normal
execution. Commands are given through the propertyUpdatesMap
whose keys are commands and values are strings
representing integer values (for example '8000') or boolean values
('true' or 'false').request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AlterSystemPropertiesResponse alterSystemProperties(Map<String,String> propertyUpdatesMap, Map<String,String> options) throws GPUdbException
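A short sketch of this overload, flushing all stores to disk. The map key is assumed to be the lowercase string value of the FLUSH_TO_DISK constant:

```java
import java.util.HashMap;
import java.util.Map;

public class AlterSystemPropertiesExample {
    // Builds the propertyUpdatesMap documented below; keys are commands and
    // values are strings holding booleans or integers.
    static Map<String, String> buildUpdates() {
        Map<String, String> updates = new HashMap<>();
        updates.put("flush_to_disk", "true"); // flush vector, object, and text search stores
        return updates;
    }

    public static void main(String[] args) {
        // With a connected GPUdb instance `db` (not shown):
        // db.alterSystemProperties(buildUpdates(), new HashMap<String, String>());
        System.out.println(buildUpdates());
    }
}
```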
alterSystemProperties
endpoint is primarily used to simplify the testing of the system and is
not expected to be used during normal execution. Commands are given
through the propertyUpdatesMap
whose keys are commands and
values are strings representing integer values (for example '8000') or
boolean values ('true' or 'false').propertyUpdatesMap
- Map containing the properties of the system
to be updated. Error if empty.
CONCURRENT_KERNEL_EXECUTION
: Enables
concurrent kernel execution if the
value is TRUE
and disables it if the value is
FALSE
.
Supported values: TRUE and FALSE.
SUBTASK_CONCURRENCY_LIMIT
: Sets the
maximum number of simultaneous
threads allocated to a given request,
on each rank. Note that thread
allocation may also be limited by
resource group limits and/or system
load.
CHUNK_SIZE
: Sets the number of
records per chunk to be used for all
new tables.
CHUNK_COLUMN_MAX_MEMORY
: Sets the
target maximum data size for each
column in a chunk to be used for all
new tables.
CHUNK_MAX_MEMORY
: Indicates the
target maximum data size for all
columns in a chunk to be used for all
new tables.
EVICT_COLUMNS
: Attempts to evict
columns from memory to the persistent
store. Value string is a semicolon
separated list of entries, each entry
being a table name optionally
followed by a comma and a comma
separated list of column names to
attempt to evict. An empty value
string will attempt to evict all
tables and columns.
EXECUTION_MODE
: Sets the
execution_mode for kernel executions
to the specified string value.
Possible values are host, device,
default (engine decides) or an
integer value that indicates max
chunk size to exec on host
EXTERNAL_FILES_DIRECTORY
: Sets the
root directory path where external
table data files are accessed from.
Path must exist on the head node
FLUSH_TO_DISK
: Flushes any changes
to any tables to the persistent
store. These changes include updates
to the vector store, object store,
and text search store. Value string
can be 'true', 'false' or
'text_search' to flush the text
search store only.
CLEAR_CACHE
: Clears cached results.
Useful to allow repeated timing of
endpoints. Value string is the name
of the table for which to clear the
cached results, or an empty string to
clear the cached results for all
tables.
COMMUNICATOR_TEST
: Invoke the
communicator test and report timing
results. Value string is a semicolon
separated list of [key]=[value]
expressions. Expressions are:
num_transactions=[num] where num is
the number of request reply
transactions to invoke per test;
message_size=[bytes] where bytes is
the size in bytes of the messages to
send; check_values=[enabled] where if
enabled is true the value of the
messages received are verified.
NETWORK_SPEED
: Invoke the network
speed test and report timing results.
Value string is a semicolon-separated
list of [key]=[value] expressions.
Valid expressions are: seconds=[time]
where time is the time in seconds to
run the test; data_size=[bytes] where
bytes is the size in bytes of the
block to be transferred;
threads=[number of threads];
to_ranks=[space-separated list of
ranks] where the list of ranks is the
ranks that rank 0 will send data to
and get data from. If to_ranks is
unspecified then all worker ranks are
used.
REQUEST_TIMEOUT
: Number of minutes
after which filtering (e.g., filter
) and aggregating (e.g.,
aggregateGroupBy
) queries will
timeout. The default value is '20'.
The minimum allowed value is '0'. The
maximum allowed value is '1440'.
MAX_GET_RECORDS_SIZE
: The maximum
number of records the database will
serve for a given data retrieval
call. The default value is '20000'.
The minimum allowed value is '0'. The
maximum allowed value is '1000000'.
MAX_GRBC_BATCH_SIZE
:
<DEVELOPER>
ENABLE_AUDIT
: Enable or disable
auditing.
AUDIT_HEADERS
: Enable or disable
auditing of request headers.
AUDIT_BODY
: Enable or disable
auditing of request bodies.
AUDIT_DATA
: Enable or disable
auditing of request data.
AUDIT_RESPONSE
: Enable or disable
auditing of response information.
SHADOW_AGG_SIZE
: Size of the shadow
aggregate chunk cache in bytes. The
default value is '10000000'. The
minimum allowed value is '0'. The
maximum allowed value is
'2147483647'.
SHADOW_FILTER_SIZE
: Size of the
shadow filter chunk cache in bytes.
The default value is '10000000'. The
minimum allowed value is '0'. The
maximum allowed value is
'2147483647'.
SYNCHRONOUS_COMPRESSION
: Compress
the vector on set_compression (instead of
waiting for the background thread). The
default value is 'false'.
ENABLE_OVERLAPPED_EQUI_JOIN
: Enable
overlapped-equi-join filter. The
default value is 'true'.
ENABLE_ONE_STEP_COMPOUND_EQUI_JOIN
:
Enable the one_step
compound-equi-join algorithm. The
default value is 'true'.
KAFKA_BATCH_SIZE
: Maximum number of
records to be ingested in a single
batch. The default value is '1000'.
The minimum allowed value is '1'. The
maximum allowed value is '10000000'.
KAFKA_POLL_TIMEOUT
: Maximum time
(milliseconds) for each poll to get
records from Kafka. The default value
is '0'. The minimum allowed value is
'0'. The maximum allowed value is
'1000'.
KAFKA_WAIT_TIME
: Maximum time
(seconds) to buffer records received
from Kafka before ingestion. The
default value is '30'. The minimum
allowed value is '1'. The maximum
allowed value is '120'.
EGRESS_PARQUET_COMPRESSION
: Parquet
file compression type.
Supported values:
The default value is SNAPPY
.
EGRESS_SINGLE_FILE_MAX_SIZE
: Max
file size (in MB) to allow saving to
a single file. May be overridden by
target limitations. The default value
is '10000'. The minimum allowed value
is '1'. The maximum allowed value is
'200000'.
MAX_CONCURRENT_KERNELS
: Sets the
max_concurrent_kernels value of the
conf. The minimum allowed value is
'0'. The maximum allowed value is
'256'.
SYSTEM_METADATA_RETENTION_PERIOD
:
Sets the
system_metadata.retention_period
value of the conf. The minimum
allowed value is '1'.
TCS_PER_TOM
: Sets the tcs_per_tom
value of the conf. The minimum
allowed value is '2'. The maximum
allowed value is '8192'.
TPS_PER_TOM
: Sets the tps_per_tom
value of the conf. The minimum
allowed value is '2'. The maximum
allowed value is '8192'.
BACKGROUND_WORKER_THREADS
: Size of
the worker rank background thread
pool. This includes background
operations such as watermark
evictions and catalog table updates. The
minimum allowed value is '1'. The
maximum allowed value is '8192'.
AI_ENABLE_RAG
: Enable RAG. The
default value is 'false'.
AI_API_PROVIDER
: AI API provider
type
AI_API_URL
: AI API URL
AI_API_KEY
: AI API key
AI_API_CONNECTION_TIMEOUT
: AI API
connection timeout in seconds
AI_API_EMBEDDINGS_MODEL
: AI API
model name
TELM_PERSIST_QUERY_METRICS
: Enable
or disable persisting of query
metrics.
POSTGRES_PROXY_IDLE_CONNECTION_TIMEOUT
:
Idle connection timeout in seconds
POSTGRES_PROXY_KEEP_ALIVE
: Enable
postgres proxy keep alive. The
default value is 'false'.
KIFS_DIRECTORY_DATA_LIMIT
: The
default maximum capacity to apply
when creating a KiFS directory
(bytes). The minimum allowed value is
'-1'.
options
- Optional parameters.
EVICT_TO_COLD
: If TRUE
and evict_columns is specified, the given
objects will be evicted to cold storage (if such
a tier exists).
Supported values: TRUE and FALSE.
PERSIST
: If TRUE
the system configuration will be written
to disk upon successful application of this
request. This will commit the changes from this
request and any additional in-memory
modifications.
Supported values: TRUE and FALSE. The default value is TRUE.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AlterTableResponse alterTable(AlterTableRequest request) throws GPUdbException
Manage a table's columns--a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not.
External tables cannot be modified except for their refresh method.
Create or delete a column (attribute) index, low-cardinality index, chunk skip index, geospatial index, CAGRA index, or HNSW index. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
Create or delete a foreign key on a particular column.
Manage a range-partitioned or a manual list-partitioned table's partitions.
Set (or reset) the tier strategy of a table or view.
Refresh and manage the refresh mode of a materialized view or an external table.
Set the time-to-live (TTL). This can be applied to tables or views.
Set the global access mode (i.e. locking) for a table. This setting trumps any role-based access controls that may be in place; e.g., a user with write access to a table marked read-only will not be able to insert records into it. The mode can be set to read-only, write-only, read/write, and no access.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterTableResponse alterTable(String tableName, String action, String value, Map<String,String> options) throws GPUdbException
Manage a table's columns--a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not.
External tables cannot be modified except for their refresh method.
Create or delete a column (attribute) index, low-cardinality index, chunk skip index, geospatial index, CAGRA index, or HNSW index. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
Create or delete a foreign key on a particular column.
Manage a range-partitioned or a manual list-partitioned table's partitions.
Set (or reset) the tier strategy of a table or view.
Refresh and manage the refresh mode of a materialized view or an external table.
Set the time-to-live (TTL). This can be applied to tables or views.
Set the global access mode (i.e. locking) for a table. This setting trumps any role-based access controls that may be in place; e.g., a user with write access to a table marked read-only will not be able to insert records into it. The mode can be set to read-only, write-only, read/write, and no access.
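As a usage sketch for the index-management case above (the table and column names are placeholders, and "create_index" is assumed to be the string value of the CREATE_INDEX constant):

```java
public class AlterTableExample {
    // Argument tuple for a create_index call: { tableName, action, value }.
    // The table name uses [schema_name.]table_name format.
    static String[] createIndexArgs() {
        return new String[] { "my_schema.my_table", "create_index", "last_name" };
    }

    public static void main(String[] args) {
        String[] a = createIndexArgs();
        // With a connected GPUdb instance `db` (not shown); an empty options
        // map requests the default (column) index type:
        // db.alterTable(a[0], a[1], a[2], new java.util.HashMap<String, String>());
        System.out.println(String.join(" ", a));
    }
}
```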
tableName
- Table on which the operation will be performed, in
[schema_name.]table_name format, using standard name resolution rules. Must be an
existing table or view.action
- Modification operation to be applied.
Supported values:
ALLOW_HOMOGENEOUS_TABLES
: No longer supported;
action will be ignored.
CREATE_INDEX
: Creates a column (attribute) index, low-cardinality index, chunk skip index, geospatial index, CAGRA index, or HNSW index (depending on the
specified INDEX_TYPE
), on the column name specified in
value
. If this column already has the
specified index, an error will be returned.
REFRESH_INDEX
: Refreshes an index identified by
INDEX_TYPE
, on the column name specified in
value
. Currently applicable only to CAGRA
indices.
DELETE_INDEX
: Deletes a column (attribute) index, low-cardinality index, chunk skip index, geospatial index, CAGRA index, or HNSW index (depending on the
specified INDEX_TYPE
), on the column name specified in
value
. If this column does not have the
specified index, an error will be returned.
MOVE_TO_COLLECTION
: [DEPRECATED--please use
MOVE_TO_SCHEMA
and use createSchema
to
create the schema if non-existent] Moves a table
or view into a schema named value
. If
the schema provided is non-existent, it will be
automatically created.
MOVE_TO_SCHEMA
: Moves a table or view into a
schema named value
. If the schema
provided is nonexistent, an error will be thrown.
If value
is empty, then the table or view
will be placed in the user's default schema.
PROTECTED
: No longer used. Previously set
whether the given tableName
should be
protected or not. The value
would have
been either 'true' or 'false'.
RENAME_TABLE
: Renames a table or view to value
. Has the same naming restrictions as tables.
TTL
: Sets the time-to-live in minutes of the
table or view specified in tableName
.
ADD_COMMENT
: Adds the comment specified in
value
to the table specified in tableName
. Use COLUMN_NAME
to set the comment for a column.
ADD_COLUMN
: Adds the column specified in value
to the table specified in tableName
. Use COLUMN_TYPE
and COLUMN_PROPERTIES
in options
to set the
column's type and properties, respectively.
CHANGE_COLUMN
: Changes type and properties of
the column specified in value
. Use COLUMN_TYPE
and COLUMN_PROPERTIES
in options
to set the
column's type and properties, respectively. Note
that primary key and/or shard key columns cannot
be changed. All unchanging column properties must
be listed for the change to take place, e.g., to
add dictionary encoding to an existing 'char4'
column, both 'char4' and 'dict' must be specified
in the options
map.
SET_COLUMN_COMPRESSION
: No longer supported;
action will be ignored.
DELETE_COLUMN
: Deletes the column specified in
value
from the table specified in tableName
.
CREATE_FOREIGN_KEY
: Creates a foreign key specified in value
using the format '(source_column_name [,
...]) references
target_table_name(primary_key_column_name [,
...]) [as foreign_key_name]'.
DELETE_FOREIGN_KEY
: Deletes a foreign key. The value
should be the foreign_key_name specified when
creating the key or the complete string used to
define it.
ADD_PARTITION
: Adds the partition specified in
value
, to either a range-partitioned or manual list-partitioned table.
REMOVE_PARTITION
: Removes the partition
specified in value
(and relocates all of
its data to the default partition) from either a
range-partitioned or manual list-partitioned table.
DELETE_PARTITION
: Deletes the partition
specified in value
(and all of its data)
from either a range-partitioned or manual list-partitioned table.
SET_GLOBAL_ACCESS_MODE
: Sets the global access
mode (i.e. locking) for the table specified in
tableName
. Specify the access mode in
value
. Valid modes are 'no_access',
'read_only', 'write_only' and 'read_write'.
REFRESH
: For a materialized view, replays all
the table creation commands required to create
the view. For an external table, reloads all
data in the table from its associated source
files or data source.
SET_REFRESH_METHOD
: For a materialized view, sets the
method by which the view is refreshed to the
method specified in value
- one of
'manual', 'periodic', or 'on_change'. For an external table, sets the method
by which the table is refreshed to the method
specified in value
- either 'manual' or
'on_start'.
SET_REFRESH_START_TIME
: Sets the time to start
periodic refreshes of this materialized view to the
datetime string specified in value
with
format 'YYYY-MM-DD HH:MM:SS'. Subsequent
refreshes occur at the specified time + N * the
refresh period.
SET_REFRESH_STOP_TIME
: Sets the time to stop
periodic refreshes of this materialized view to the
datetime string specified in value
with
format 'YYYY-MM-DD HH:MM:SS'.
SET_REFRESH_PERIOD
: Sets the time interval in
seconds at which to refresh this materialized view to the value
specified in value
. Also, sets the
refresh method to periodic if not already set.
SET_REFRESH_SPAN
: Sets the future time offset (in
seconds) for the view refresh to stop.
SET_REFRESH_EXECUTE_AS
: Sets the user name to
refresh this materialized view to the value
specified in value
.
REMOVE_TEXT_SEARCH_ATTRIBUTES
: Removes the text search attribute from all
columns.
REMOVE_SHARD_KEYS
: Removes the shard key
property from all columns, so that the table will
be considered randomly sharded. The data is not
moved. The value
is ignored.
SET_STRATEGY_DEFINITION
: Sets the tier strategy for the table and
its columns to the one specified in value
, replacing the existing tier strategy in
its entirety.
CANCEL_DATASOURCE_SUBSCRIPTION
: Permanently
unsubscribes from a data source that is loading
continuously as a stream. The data source can be
Kafka / S3 / Azure.
PAUSE_DATASOURCE_SUBSCRIPTION
: Temporarily
unsubscribes from a data source that is loading
continuously as a stream. The data source can be
Kafka / S3 / Azure.
RESUME_DATASOURCE_SUBSCRIPTION
: Resubscribes to a
paused data source subscription. The data source
can be Kafka / S3 / Azure.
CHANGE_OWNER
: Changes the owner resource group of
the table.
SET_LOAD_VECTORS_POLICY
: Sets the startup data
loading scheme for the table; see the description of
'load_vectors_policy' in createTable
for possible values for value
SET_BUILD_PK_INDEX_POLICY
: Sets the startup primary
key generation scheme for the table; see the
description of 'build_pk_index_policy' in createTable
for possible values for value
SET_BUILD_MATERIALIZED_VIEW_POLICY
: Sets the startup
rebuilding scheme for the materialized view; see the
description of 'build_materialized_view_policy'
in createMaterializedView
for possible values
for value
value
- The value of the modification, depending on action
. For example, if action
is ADD_COLUMN
, this would be the column name; while the
column's definition would be covered by the COLUMN_TYPE
, COLUMN_PROPERTIES
, COLUMN_DEFAULT_VALUE
, and ADD_COLUMN_EXPRESSION
in options
. If action
is TTL
, it
would be the number of minutes for the new TTL. If action
is REFRESH
, this field would be blank.options
- Optional parameters.
ACTION
COLUMN_NAME
TABLE_NAME
COLUMN_DEFAULT_VALUE
: When adding a column, set
a default value for existing records. For
nullable columns, the default value will be
null, regardless of data type.
COLUMN_PROPERTIES
: When adding or changing a
column, set the column properties (strings,
separated by a comma: data, store_only,
text_search, char8, int8, etc.).
COLUMN_TYPE
: When adding or changing a column,
set the column type (strings, separated by a
comma: int, double, string, null, etc.).
COMPRESSION_TYPE
: No longer supported; option
will be ignored.
Supported values:
The default value is SNAPPY
.
COPY_VALUES_FROM_COLUMN
: [DEPRECATED--please
use ADD_COLUMN_EXPRESSION
instead.]
RENAME_COLUMN
: When changing a column, specify
the new column name.
VALIDATE_CHANGE_COLUMN
: When changing a column,
validate the change before applying it (or not).
Supported values:
TRUE
: Validate all values. A value too
large (or too long) for the new type
will prevent any change.
FALSE
: When a value is too large or
long, it will be truncated.
TRUE
.
UPDATE_LAST_ACCESS_TIME
: Indicates whether the
time-to-live (TTL) expiration
countdown timer should be reset to the table's
TTL.
Supported values:
TRUE
: Reset the expiration countdown
timer to the table's configured TTL.
FALSE
: Don't reset the timer;
expiration countdown will continue from
where it is, as if the table had not
been accessed.
TRUE
.
ADD_COLUMN_EXPRESSION
: When adding a column, an
optional expression to use for the new column's
values. Any valid expression may be used,
including one containing references to existing
columns in the same table.
STRATEGY_DEFINITION
: Optional parameter for
specifying the tier strategy for the table
and its columns when action
is SET_STRATEGY_DEFINITION
, replacing the existing
tier strategy in its entirety.
INDEX_TYPE
: Type of index to create, when
action
is CREATE_INDEX
; to refresh, when action
is REFRESH_INDEX
; or to delete, when action
is DELETE_INDEX
.
Supported values:
COLUMN
: Create or delete a column (attribute)
index.
LOW_CARDINALITY
: Create a low-cardinality column
(attribute) index.
CHUNK_SKIP
: Create or delete a chunk skip index.
GEOSPATIAL
: Create or delete a geospatial index
CAGRA
: Create or delete a CAGRA index on a vector column
HNSW
: Create or delete an HNSW index on a vector column
COLUMN
.
INDEX_OPTIONS
: Options to use when creating an
index, in the format "key: value [, key: value
[, ...]]". Valid options vary by index type.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterTableColumnsResponse alterTableColumns(AlterTableColumnsRequest request) throws GPUdbException
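The CHANGE_COLUMN action of alterTable described above requires all unchanging column properties to be restated. The sketch below builds the options map for adding dictionary encoding to an existing 'char4' column; the table name, column name, and server URL are hypothetical, and the actual call (commented out) needs a live cluster.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: options for an alterTable CHANGE_COLUMN action (hypothetical names).
public class AlterTableChangeColumnSketch {
    public static Map<String, String> changeColumnOptions() {
        Map<String, String> options = new LinkedHashMap<>();
        options.put("column_type", "string");
        // All unchanging properties must be restated: keep 'char4', add 'dict'.
        options.put("column_properties", "char4,dict");
        // Validate values against the new type/properties before applying.
        options.put("validate_change_column", "true");
        return options;
    }

    public static void main(String[] args) {
        System.out.println(changeColumnOptions());
        // With a running cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.alterTable("example.my_table", "change_column", "tag_col", changeColumnOptions());
    }
}
```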
Create or delete an index on a particular column. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
Manage a table's columns--a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not.
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterTableColumnsResponse alterTableColumns(String tableName, List<Map<String,String>> columnAlterations, Map<String,String> options) throws GPUdbException
Create or delete an index on a particular column. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
Manage a table's columns--a column can be added, removed, or have its type and properties modified, including whether it is dictionary encoded or not.
tableName
- Table on which the operation will be performed. Must
be an existing table or view, in
[schema_name.]table_name format, using standard name resolution rules.columnAlterations
- List of alter table add/delete/change column
requests, all for the same table. Each request is
a map that includes 'column_name', 'action', and
the options specific to that action. The options
are the same as those used in alter table
requests, but are given in the same map as the
column name and the action. For example:
[{'column_name':'col_1','action':'change_column','rename_column':'col_2'},{'column_name':'col_1','action':'add_column',
'type':'int','default_value':'1'}]options
- Optional parameters.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterTableMetadataResponse alterTableMetadata(AlterTableMetadataRequest request) throws GPUdbException
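The columnAlterations format shown above can be built as a list of maps. This sketch mirrors the parameter description's own example; the column names are hypothetical and the alterTableColumns call itself (commented out) requires a live cluster.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: the columnAlterations list for alterTableColumns.
public class AlterTableColumnsSketch {
    public static List<Map<String, String>> columnAlterations() {
        List<Map<String, String>> alterations = new ArrayList<>();

        // Rename col_1 to col_2; options go in the same map as name and action.
        Map<String, String> rename = new LinkedHashMap<>();
        rename.put("column_name", "col_1");
        rename.put("action", "change_column");
        rename.put("rename_column", "col_2");
        alterations.add(rename);

        // Add an int column with a default value for existing records.
        Map<String, String> add = new LinkedHashMap<>();
        add.put("column_name", "col_3");
        add.put("action", "add_column");
        add.put("type", "int");
        add.put("default_value", "1");
        alterations.add(add);

        return alterations;
    }

    public static void main(String[] args) {
        System.out.println(columnAlterations());
        // db.alterTableColumns("example.my_table", columnAlterations(), new LinkedHashMap<>());
    }
}
```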
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AlterTableMetadataResponse alterTableMetadata(List<String> tableNames, Map<String,String> metadataMap, Map<String,String> options) throws GPUdbException
tableNames
- Names of the tables whose metadata will be updated,
in [schema_name.]table_name format, using standard name resolution rules. All
specified tables must exist, or an error will be
returned.metadataMap
- A map which contains the metadata of the tables that
are to be updated. Note that only one map is
provided for all the tables, so the same change will be
applied to every table. If the provided map is
empty, then all existing metadata for the table(s)
will be cleared.options
- Optional parameters. The default value is an empty
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public AlterTableMonitorResponse alterTableMonitor(AlterTableMonitorRequest request) throws GPUdbException
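Because alterTableMetadata applies one map to every listed table, a single metadata map suffices for several tables. The sketch below uses hypothetical table names and metadata keys; the call itself (commented out) needs a live cluster.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: one metadata map applied to several tables via alterTableMetadata.
public class AlterTableMetadataSketch {
    public static Map<String, String> metadataMap() {
        Map<String, String> metadata = new LinkedHashMap<>();
        metadata.put("owner_team", "analytics");   // hypothetical metadata keys
        metadata.put("retention_days", "90");
        return metadata;
    }

    public static void main(String[] args) {
        List<String> tableNames = Arrays.asList("example.orders", "example.orders_archive");
        System.out.println(tableNames + " -> " + metadataMap());
        // The same map is applied to every listed table; an empty map would
        // clear all existing metadata instead:
        // db.alterTableMetadata(tableNames, metadataMap(), new LinkedHashMap<>());
    }
}
```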
createTableMonitor
.request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterTableMonitorResponse alterTableMonitor(String topicId, Map<String,String> monitorUpdatesMap, Map<String,String> options) throws GPUdbException
createTableMonitor
.topicId
- The topic ID returned by createTableMonitor
.monitorUpdatesMap
- Map containing the properties of the table
monitor to be updated. An error is returned if the map is empty.
SCHEMA_NAME
: Updates the schema name.
If SCHEMA_NAME
doesn't exist, an error
will be thrown. If SCHEMA_NAME
is empty, then the user's
default schema will be used.
options
- Optional parameters.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterTierResponse alterTier(AlterTierRequest request) throws GPUdbException
To disable watermark-based eviction, set both HIGH_WATERMARK
and LOW_WATERMARK
to 100.
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public AlterTierResponse alterTier(String name, Map<String,String> options) throws GPUdbException
To disable watermark-based eviction, set both HIGH_WATERMARK
and LOW_WATERMARK
to 100.
name
- Name of the tier to be altered. Must be an existing tier
group name.options
- Optional parameters.
CAPACITY
: Maximum size in bytes this tier may
hold at once.
HIGH_WATERMARK
: Threshold of usage of this
tier's resource that, once exceeded, will trigger
watermark-based eviction from this tier. The
minimum allowed value is '0'. The maximum
allowed value is '100'.
LOW_WATERMARK
: Threshold of resource usage that,
once fallen below after crossing the HIGH_WATERMARK
, will cease watermark-based
eviction from this tier. The minimum allowed
value is '0'. The maximum allowed value is
'100'.
WAIT_TIMEOUT
: Timeout in seconds for reading
from or writing to this resource. Applies to
cold storage tiers only.
PERSIST
: If TRUE
the system configuration will be written
to disk upon successful application of this
request. This will commit the changes from this
request and any additional in-memory
modifications.
Supported values:
The default value is TRUE
.
RANK
: Apply the requested change only to a
specific rank. The minimum allowed value is '0'.
The maximum allowed value is '10000'.
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public AlterUserResponse alterUser(AlterUserRequest request) throws GPUdbException
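The alterTier options above can be combined to resize a tier and tune watermark-based eviction. This sketch builds such an options map; the tier name and sizes are hypothetical, and the call (commented out) requires a live cluster.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: alterTier options resizing a tier and tuning watermark eviction.
public class AlterTierSketch {
    public static Map<String, String> tierOptions() {
        Map<String, String> options = new LinkedHashMap<>();
        options.put("capacity", "107374182400");   // 100 GiB, in bytes
        options.put("high_watermark", "90");       // start evicting above 90% usage
        options.put("low_watermark", "70");        // stop evicting below 70% usage
        options.put("persist", "true");            // write the change to disk
        return options;
    }

    public static void main(String[] args) {
        System.out.println(tierOptions());
        // Setting both watermarks to 100 disables watermark-based eviction:
        // db.alterTier("RAM", tierOptions());
    }
}
```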
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public AlterUserResponse alterUser(String name, String action, String value, Map<String,String> options) throws GPUdbException
name
- Name of the user to be altered. Must be an existing user.action
- Modification operation to be applied to the user.
Supported values:
SET_ACTIVATED
: Whether the user is allowed to log in.
TRUE
: User may log in
FALSE
: User may not log in
SET_COMMENT
: Sets the comment for an internal
user.
SET_DEFAULT_SCHEMA
: Set the default_schema for
an internal user. An empty string means the user
will have no default schema.
SET_PASSWORD
: Sets the password of the user. The
user must be an internal user.
SET_RESOURCE_GROUP
: Sets the resource group for
an internal user. The resource group must exist,
otherwise, an empty string assigns the user to
the default resource group.
value
- The value of the modification, depending on action
.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public AlterVideoResponse alterVideo(AlterVideoRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterVideoResponse alterVideo(String path, Map<String,String> options) throws GPUdbException
path
- Fully-qualified KiFS path to the video to be altered.options
- Optional parameters.
The default value is an empty Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AlterWalResponse alterWal(AlterWalRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public AlterWalResponse alterWal(List<String> tableNames, Map<String,String> options) throws GPUdbException
tableNames
- List of tables to modify. An asterisk changes the
system settings.options
- Optional parameters.
MAX_SEGMENT_SIZE
: Maximum size of an individual
segment file
SEGMENT_COUNT
: Approximate number of segment
files to split the wal across. Must be at least
two.
SYNC_POLICY
: Policy determining how wal entries are
synchronized to disk.
Supported values:
NONE
: Disables the wal
BACKGROUND
: Wal entries are
periodically written instead of
immediately after each operation
FLUSH
: Protects entries in the event of
a database crash
FSYNC
: Protects entries in the event of
an OS crash
FLUSH_FREQUENCY
: Specifies how frequently wal
entries are written with background sync. This
is a global setting and can only be used with
the system-wide tableNames specifier '*'.
CHECKSUM
: If TRUE
each entry will be checked against a
protective checksum.
Supported values:
The default value is TRUE
.
OVERRIDE_NON_DEFAULT
: If TRUE
tables with unique wal settings will be
overridden when applying a system level change.
Supported values:
The default value is FALSE
.
RESTORE_SYSTEM_SETTINGS
: If TRUE
tables with unique wal settings will be
reverted to the current global settings. Cannot
be used in conjunction with any other option.
Supported values:
The default value is FALSE
.
PERSIST
: If TRUE
and a system-level change was requested,
the system configuration will be written to disk
upon successful application of this request.
This will commit the changes from this request
and any additional in-memory modifications.
Supported values:
The default value is TRUE
.
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public AppendRecordsResponse appendRecords(AppendRecordsRequest request) throws GPUdbException
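A typical alterWal options map combines a sync policy with segment and checksum settings, as described above. The table names below are hypothetical; the call itself (commented out) requires a live cluster.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: alterWal options switching two tables to the 'flush' sync policy.
public class AlterWalSketch {
    public static Map<String, String> walOptions() {
        Map<String, String> options = new LinkedHashMap<>();
        options.put("sync_policy", "flush");  // protect entries against a database crash
        options.put("segment_count", "4");    // split the wal across four segment files
        options.put("checksum", "true");      // verify each entry with a checksum
        return options;
    }

    public static void main(String[] args) {
        List<String> tableNames = Arrays.asList("example.orders", "example.customers");
        System.out.println(tableNames + " -> " + walOptions());
        // Passing the list "*" instead would change the system-wide settings:
        // db.alterWal(tableNames, walOptions());
    }
}
```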
sourceTableName
) to a particular target table (specified by tableName
). The
field map (specified by fieldMap
) holds
the user-specified map of target table column names with their mapped
source column names.request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public AppendRecordsResponse appendRecords(String tableName, String sourceTableName, Map<String,String> fieldMap, Map<String,String> options) throws GPUdbException
sourceTableName
) to a particular target table (specified by tableName
). The field map (specified by fieldMap
) holds the
user-specified map of target table column names with their mapped source
column names.tableName
- The table name for the records to be appended, in
[schema_name.]table_name format, using standard name resolution rules. Must be an
existing table.sourceTableName
- The source table name to get records from, in
[schema_name.]table_name format, using standard
name resolution rules. Must
be an existing table name.fieldMap
- Contains the mapping of column names from the target
table (specified by tableName
) as the keys, and
corresponding column names or expressions (e.g.,
'col_name+1') from the source table (specified by
sourceTableName
). Must be existing column names
in the source and target tables, and their types must
match. For details on using expressions, see Expressions.options
- Optional parameters.
OFFSET
: A positive integer indicating the
number of initial results to skip from sourceTableName
. Default is 0. The minimum
allowed value is 0. The maximum allowed value is
MAX_INT. The default value is '0'.
LIMIT
: A positive integer indicating the
maximum number of results to be returned from
sourceTableName
, or END_OF_SET (-9999)
to indicate that the maximum number of results
should be returned. The default value is
'-9999'.
EXPRESSION
: Optional filter expression to apply
to the sourceTableName
. The default
value is ''.
ORDER_BY
: Comma-separated list of the columns
to be sorted by from source table (specified by
sourceTableName
), e.g., 'timestamp asc,
x desc'. The ORDER_BY
columns do not have to be present in
fieldMap
. The default value is ''.
UPDATE_ON_EXISTING_PK
: Specifies the record
collision policy for inserting source table
records (specified by sourceTableName
)
into a target table (specified by tableName
) with a primary key. If set to TRUE
, any existing table record with primary
key values that match those of a source table
record being inserted will be replaced by that
new record (the new data will be "upserted"). If
set to FALSE
, any existing table record with primary
key values that match those of a source table
record being inserted will remain unchanged,
while the source record will be rejected and an
error handled as determined by IGNORE_EXISTING_PK
. If the specified table
does not have a primary key, then this option
has no effect.
Supported values:
TRUE
: Upsert new records when primary
keys match existing records
FALSE
: Reject new records when primary
keys match existing records
FALSE
.
IGNORE_EXISTING_PK
: Specifies the record
collision error-suppression policy for inserting
source table records (specified by sourceTableName
) into a target table (specified
by tableName
) with a primary key, only used when
not in upsert mode (upsert mode is disabled when
UPDATE_ON_EXISTING_PK
is FALSE
). If set to TRUE
, any source table record being inserted
that is rejected for having primary key values
that match those of an existing target table
record will be ignored with no error generated.
If FALSE
, the rejection of any source table record
for having primary key values matching an
existing target table record will result in an
error being raised. If the specified table does
not have a primary key or if upsert mode is in
effect (UPDATE_ON_EXISTING_PK
is TRUE
), then this option has no effect.
Supported values:
TRUE
: Ignore source table records whose
primary key values collide with those of
target table records
FALSE
: Raise an error for any source
table record whose primary key values
collide with those of a target table
record
FALSE
.
TRUNCATE_STRINGS
: If set to TRUE
, it allows inserting longer strings into
smaller charN string columns by truncating the
longer strings to fit.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ClearStatisticsResponse clearStatistics(ClearStatisticsRequest request) throws GPUdbException
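The fieldMap and options for appendRecords can be sketched as below: fieldMap keys are target-table columns, and values are source columns or expressions. The table and column names are hypothetical, and the call (commented out) requires a live cluster.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: fieldMap and options for appendRecords (hypothetical names).
public class AppendRecordsSketch {
    public static Map<String, String> fieldMap() {
        Map<String, String> fieldMap = new LinkedHashMap<>();
        fieldMap.put("id", "id");               // copy the source column directly
        fieldMap.put("total", "subtotal+tax");  // expressions are allowed as sources
        return fieldMap;
    }

    public static Map<String, String> appendOptions() {
        Map<String, String> options = new LinkedHashMap<>();
        options.put("expression", "region = 'EU'");    // filter the source rows
        options.put("update_on_existing_pk", "true");  // upsert on primary-key collisions
        return options;
    }

    public static void main(String[] args) {
        System.out.println(fieldMap() + " / " + appendOptions());
        // db.appendRecords("example.target", "example.source", fieldMap(), appendOptions());
    }
}
```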
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ClearStatisticsResponse clearStatistics(String tableName, String columnName, Map<String,String> options) throws GPUdbException
tableName
- Name of a table, in [schema_name.]table_name format,
using standard name resolution rules. Must be an
existing table. The default value is ''.columnName
- Name of the column in tableName
for which to
clear statistics. The column must be from an existing
table. An empty string clears statistics for all
columns in the table. The default value is ''.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ClearTableResponse clearTable(ClearTableRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ClearTableResponse clearTable(String tableName, String authorization, Map<String,String> options) throws GPUdbException
tableName
- Name of the table to be cleared, in
[schema_name.]table_name format, using standard name resolution rules. Must be an
existing table. Empty string clears all available
tables, though this behavior is prevented by
default via the gpudb.conf parameter 'disable_clear_all'.
The default value is ''.authorization
- No longer used. User can pass an empty string. The
default value is ''.options
- Optional parameters.
NO_ERROR_IF_NOT_EXISTS
: If TRUE
and if the table specified in tableName
does not exist, no error is returned.
If FALSE
and if the table specified in tableName
does not exist, then an error is
returned.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ClearTableMonitorResponse clearTableMonitor(ClearTableMonitorRequest request) throws GPUdbException
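Since the authorization argument is no longer used, a clearTable call passes an empty string there; the sketch below builds the options map for a tolerant drop. The table name is hypothetical and the call (commented out) requires a live cluster.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: clearTable options for a tolerant drop (hypothetical table name).
public class ClearTableSketch {
    public static Map<String, String> clearOptions() {
        Map<String, String> options = new LinkedHashMap<>();
        // Avoid an error if the table has already been dropped.
        options.put("no_error_if_not_exists", "true");
        return options;
    }

    public static void main(String[] args) {
        System.out.println(clearOptions());
        // The second argument (authorization) is no longer used:
        // db.clearTable("example.staging_orders", "", clearOptions());
    }
}
```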
createTableMonitor
.request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ClearTableMonitorResponse clearTableMonitor(String topicId, Map<String,String> options) throws GPUdbException
createTableMonitor
.topicId
- The topic ID returned by createTableMonitor
.options
- Optional parameters.
KEEP_AUTOGENERATED_SINK
: If TRUE
, the auto-generated datasink associated with this
monitor, if there is one, will be retained for
further use. If FALSE
, then the auto-generated sink will be
dropped if there are no other monitors
referencing it.
Supported values:
The default value is FALSE
.
CLEAR_ALL_REFERENCES
: If TRUE
, all references that share the same topicId
will be cleared.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ClearTriggerResponse clearTrigger(ClearTriggerRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ClearTriggerResponse clearTrigger(String triggerId, Map<String,String> options) throws GPUdbException
triggerId
- ID for the trigger to be deactivated.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CollectStatisticsResponse collectStatistics(CollectStatisticsRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CollectStatisticsResponse collectStatistics(String tableName, List<String> columnNames, Map<String,String> options) throws GPUdbException
tableName
- Name of a table, in [schema_name.]table_name format,
using standard name resolution rules. Must be an
existing table.columnNames
- List of one or more column names in tableName
for which to collect statistics
(cardinality, mean value, etc.).options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateContainerRegistryResponse createContainerRegistry(CreateContainerRegistryRequest request) throws GPUdbException
GPUdbException
public CreateContainerRegistryResponse createContainerRegistry(String registryName, String uri, String credential, Map<String,String> options) throws GPUdbException
GPUdbException
public CreateCredentialResponse createCredential(CreateCredentialRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateCredentialResponse createCredential(String credentialName, String type, String identity, String secret, Map<String,String> options) throws GPUdbException
credentialName
- Name of the credential to be created. Must
contain only letters, digits, and underscores,
and cannot begin with a digit. Must not match an
existing credential name.type
- Type of the credential to be created.
Supported values:
identity
- User of the credential to be created.secret
- Password of the credential to be created.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateDatasinkResponse createDatasink(CreateDatasinkRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateDatasinkResponse createDatasink(String name, String destination, Map<String,String> options) throws GPUdbException
name
- Name of the data sink to be created.destination
- Destination for the output data in format
'storage_provider_type://path[:port]'. Supported
storage provider types are 'azure', 'gcs', 'hdfs',
'http', 'https', 'jdbc', 'kafka', and 's3'.options
- Optional parameters.
CONNECTION_TIMEOUT
: Timeout in seconds for
connecting to this data sink
WAIT_TIMEOUT
: Timeout in seconds for waiting
for a response from this data sink
CREDENTIAL
: Name of the credential object to be used
in this data sink
S3_BUCKET_NAME
: Name of the Amazon S3 bucket to
use as the data sink
S3_REGION
: Name of the Amazon S3 region where
the given bucket is located
S3_VERIFY_SSL
: Whether to verify SSL
connections.
Supported values:
TRUE
: Connect with SSL verification
FALSE
: Connect without verifying the
SSL connection; for testing purposes,
bypassing TLS errors, self-signed
certificates, etc.
TRUE
.
S3_USE_VIRTUAL_ADDRESSING
: Whether to use
virtual addressing when referencing the Amazon
S3 sink.
Supported values:
TRUE
: The request URI should be
specified in virtual-hosted-style format
where the bucket name is part of the
domain name in the URL.
FALSE
: Use path-style URI for requests.
TRUE
.
S3_AWS_ROLE_ARN
: Amazon IAM Role ARN which has
required S3 permissions that can be assumed for
the given S3 IAM user
S3_ENCRYPTION_CUSTOMER_ALGORITHM
: Customer
encryption algorithm used for encrypting data
S3_ENCRYPTION_CUSTOMER_KEY
: Customer encryption
key to encrypt or decrypt data
S3_ENCRYPTION_TYPE
: Server side encryption type
S3_KMS_KEY_ID
: KMS key
HDFS_KERBEROS_KEYTAB
: Kerberos keytab file
location for the given HDFS user. This may be a
KIFS file.
HDFS_DELEGATION_TOKEN
: Delegation token for the
given HDFS user
HDFS_USE_KERBEROS
: Use kerberos authentication
for the given HDFS cluster.
Supported values:
The default value is FALSE
.
AZURE_STORAGE_ACCOUNT_NAME
: Name of the Azure
storage account to use as the data sink, this is
valid only if tenant_id is specified
AZURE_CONTAINER_NAME
: Name of the Azure storage
container to use as the data sink
AZURE_TENANT_ID
: Active Directory tenant ID (or
directory ID)
AZURE_SAS_TOKEN
: Shared access signature token
for Azure storage account to use as the data
sink
AZURE_OAUTH_TOKEN
: Oauth token to access given
storage container
GCS_BUCKET_NAME
: Name of the Google Cloud
Storage bucket to use as the data sink
GCS_PROJECT_ID
: Name of the Google Cloud
project to use as the data sink
GCS_SERVICE_ACCOUNT_KEYS
: Google Cloud service
account keys to use for authenticating the data
sink
JDBC_DRIVER_JAR_PATH
: JDBC driver jar file
location
JDBC_DRIVER_CLASS_NAME
: Name of the JDBC driver
class
KAFKA_TOPIC_NAME
: Name of the Kafka topic to
publish to if destination
is a Kafka
broker
MAX_BATCH_SIZE
: Maximum number of records per
notification message. The default value is '1'.
MAX_MESSAGE_SIZE
: Maximum size in bytes of each
notification message. The default value is
'1000000'.
JSON_FORMAT
: The desired format of JSON-encoded
notification messages.
Supported values:
The default value is FLAT
.
USE_MANAGED_CREDENTIALS
: When no credentials
are supplied, we use anonymous access by
default. If this is set, we will use cloud
provider user settings.
Supported values:
The default value is FALSE
.
USE_HTTPS
: Use https to connect to datasink if
true, otherwise use http.
Supported values:
The default value is TRUE
.
SKIP_VALIDATION
: Bypass validation of
connection to this data sink.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateDatasourceResponse createDatasource(CreateDatasourceRequest request) throws GPUdbException
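Putting the createDatasink parameters together for a Kafka destination might look like the sketch below; the broker address, topic, and credential names are hypothetical, and the call (commented out) requires a live cluster.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: createDatasink options for a Kafka destination (hypothetical names).
public class CreateDatasinkSketch {
    public static Map<String, String> datasinkOptions() {
        Map<String, String> options = new LinkedHashMap<>();
        options.put("kafka_topic_name", "table-monitor-events"); // topic to publish to
        options.put("max_batch_size", "100");     // records per notification message
        options.put("connection_timeout", "10");  // seconds
        options.put("credential", "kafka_cred");  // existing credential object
        return options;
    }

    public static void main(String[] args) {
        System.out.println(datasinkOptions());
        // Destination uses 'storage_provider_type://path[:port]' format:
        // db.createDatasink("event_sink", "kafka://broker.example.com:9092", datasinkOptions());
    }
}
```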
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateDatasourceResponse createDatasource(String name, String location, String userName, String password, Map<String,String> options) throws GPUdbException
name
- Name of the data source to be created.location
- Location of the remote storage in
'storage_provider_type://[storage_path[:storage_port]]'
format. Supported storage provider types are 'azure',
'gcs', 'hdfs', 'jdbc', 'kafka', 'confluent', and 's3'.userName
- Name of the remote system user; may be an empty stringpassword
- Password for the remote system user; may be an empty
stringoptions
- Optional parameters.
SKIP_VALIDATION
: Bypass validation of
connection to remote source.
Supported values:
The default value is FALSE
.
CONNECTION_TIMEOUT
: Timeout in seconds for
connecting to this storage provider
WAIT_TIMEOUT
: Timeout in seconds for reading
from this storage provider
CREDENTIAL
: Name of the credential object to be used
in data source
S3_BUCKET_NAME
: Name of the Amazon S3 bucket to
use as the data source
S3_REGION
: Name of the Amazon S3 region where
the given bucket is located
S3_VERIFY_SSL
: Whether to verify SSL
connections.
Supported values:
TRUE
: Connect with SSL verification
FALSE
: Connect without verifying the
SSL connection; for testing purposes,
bypassing TLS errors, self-signed
certificates, etc.
TRUE
.
S3_USE_VIRTUAL_ADDRESSING
: Whether to use
virtual addressing when referencing the Amazon
S3 source.
Supported values:
TRUE
: The request URI should be
specified in virtual-hosted-style format
where the bucket name is part of the
domain name in the URL.
FALSE
: Use path-style URI for requests.
TRUE
.
S3_AWS_ROLE_ARN
: Amazon IAM Role ARN which has
required S3 permissions that can be assumed for
the given S3 IAM user
S3_ENCRYPTION_CUSTOMER_ALGORITHM
: Customer
encryption algorithm used for encrypting data
S3_ENCRYPTION_CUSTOMER_KEY
: Customer encryption
key to encrypt or decrypt data
HDFS_KERBEROS_KEYTAB
: Kerberos keytab file
location for the given HDFS user. This may be a
KIFS file.
HDFS_DELEGATION_TOKEN
: Delegation token for the
given HDFS user
HDFS_USE_KERBEROS
: Use Kerberos authentication
for the given HDFS cluster.
Supported values: TRUE, FALSE.
The default value is FALSE.
AZURE_STORAGE_ACCOUNT_NAME
: Name of the Azure
storage account to use as the data source; this
is valid only if tenant_id is specified
AZURE_CONTAINER_NAME
: Name of the Azure storage
container to use as the data source
AZURE_TENANT_ID
: Active Directory tenant ID (or
directory ID)
AZURE_SAS_TOKEN
: Shared access signature token
for Azure storage account to use as the data
source
AZURE_OAUTH_TOKEN
: OAuth token to access given
storage container
GCS_BUCKET_NAME
: Name of the Google Cloud
Storage bucket to use as the data source
GCS_PROJECT_ID
: Name of the Google Cloud
project to use as the data source
GCS_SERVICE_ACCOUNT_KEYS
: Google Cloud service
account keys to use for authenticating the data
source
IS_STREAM
: Load from Azure/GCS/S3 continuously as a
stream.
Supported values: TRUE, FALSE.
The default value is FALSE.
KAFKA_TOPIC_NAME
: Name of the Kafka topic to
use as the data source
JDBC_DRIVER_JAR_PATH
: JDBC driver jar file
location. This may be a KIFS file.
JDBC_DRIVER_CLASS_NAME
: Name of the JDBC driver
class
ANONYMOUS
: Use anonymous connection to storage
provider--DEPRECATED: this is now the default.
Specify use_managed_credentials for
non-anonymous connection.
Supported values: TRUE, FALSE.
The default value is TRUE.
USE_MANAGED_CREDENTIALS
: When no credentials
are supplied, we use anonymous access by
default. If this is set, we will use cloud
provider user settings.
Supported values: TRUE, FALSE.
The default value is FALSE.
USE_HTTPS
: Use HTTPS to connect to the data source
if true; otherwise, use HTTP.
Supported values: TRUE, FALSE.
The default value is TRUE.
SCHEMA_REGISTRY_LOCATION
: Location of Confluent
Schema Registry in
'[storage_path[:storage_port]]' format.
SCHEMA_REGISTRY_CREDENTIAL
: Confluent Schema
Registry credential object name.
SCHEMA_REGISTRY_PORT
: Confluent Schema Registry
port (optional).
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateDeltaTableResponse createDeltaTable(CreateDeltaTableRequest request) throws GPUdbException
GPUdbException
public CreateDeltaTableResponse createDeltaTable(String deltaTableName, String tableName, Map<String,String> options) throws GPUdbException
GPUdbException
public CreateDirectoryResponse createDirectory(CreateDirectoryRequest request) throws GPUdbException
uploadFiles
.request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateDirectoryResponse createDirectory(String directoryName, Map<String,String> options) throws GPUdbException
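A minimal sketch of this overload; the directory name and server URL are hypothetical, and option keys are assumed lowercase forms of the documented constants:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateDirectorySketch {
    public static void main(String[] args) throws Exception {
        Map<String, String> options = new HashMap<>();
        options.put("no_error_if_exists", "true"); // NO_ERROR_IF_EXISTS
        options.put("data_limit", "-1");           // DATA_LIMIT: no upper limit

        // Against a live cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createDirectory("example_dir", options);

        System.out.println("options: " + options);
    }
}
```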
uploadFiles
.directoryName
- Name of the directory in KiFS to be created.options
- Optional parameters.
CREATE_HOME_DIRECTORY
: When set, a home
directory is created for the user name provided
in the value. The directoryName
must be
an empty string in this case. The user must
exist.
DATA_LIMIT
: The maximum capacity, in bytes, to
apply to the created directory. Set to -1 to
indicate no upper limit. If empty, the system
default limit is applied.
NO_ERROR_IF_EXISTS
: If TRUE
, does not return an error if the directory
already exists.
Supported values: TRUE, FALSE.
The default value is FALSE.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateEnvironmentResponse createEnvironment(CreateEnvironmentRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateEnvironmentResponse createEnvironment(String environmentName, Map<String,String> options) throws GPUdbException
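A minimal sketch of this overload; the environment name and server URL are hypothetical:

```java
import java.util.Collections;
import java.util.Map;

public class CreateEnvironmentSketch {
    public static void main(String[] args) throws Exception {
        // Options default to an empty map for this endpoint.
        Map<String, String> options = Collections.emptyMap();

        // Against a live cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createEnvironment("example_udf_env", options);

        System.out.println("options empty: " + options.isEmpty());
    }
}
```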
environmentName
- Name of the environment to be created.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateGraphResponse createGraph(CreateGraphRequest request) throws GPUdbException
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateGraphResponse createGraph(String graphName, boolean directedGraph, List<String> nodes, List<String> edges, List<String> weights, List<String> restrictions, Map<String,String> options) throws GPUdbException
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.
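A sketch of assembling identifier combinations for this overload. The 'road_edges' table, its columns, and the server URL are hypothetical, the weights combination is illustrative (a full one would typically also bind WEIGHTS_EDGE_ID), and option keys are assumed lowercase forms of the documented constants:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CreateGraphSketch {
    public static void main(String[] args) throws Exception {
        // Identifier combinations over a hypothetical 'road_edges' table.
        List<String> nodes = Collections.emptyList(); // infer nodes from edges
        List<String> edges = Arrays.asList(
                "road_edges.node1 AS EDGE_NODE1_ID",
                "road_edges.node2 AS EDGE_NODE2_ID");
        List<String> weights = Arrays.asList(
                "road_edges.length AS WEIGHTS_VALUESPECIFIED");
        List<String> restrictions = Collections.emptyList();

        Map<String, String> options = new HashMap<>();
        options.put("recreate", "true");     // RECREATE
        options.put("save_persist", "true"); // SAVE_PERSIST

        // Against a live cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createGraph("example_graph", true, nodes, edges, weights,
        //         restrictions, options);

        System.out.println("edge identifiers: " + edges.size());
    }
}
```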
graphName
- Name of the graph resource to generate.directedGraph
- If set to TRUE
, the graph will be directed. If set to
FALSE
, the graph will not be directed. Consult Directed Graphs for more
details.
Supported values: true, false.
The default value is true.
nodes
- Nodes represent fundamental topological units of a graph.
Nodes must be specified using identifiers; identifiers are grouped as
combinations. Identifiers can be used
with existing column names, e.g., 'table.column AS
NODE_ID', expressions, e.g., 'ST_MAKEPOINT(column1,
column2) AS NODE_WKTPOINT', or constant values, e.g., '{9,
10, 11} AS NODE_ID'. If using constant values in an
identifier combination, the number of values specified
must match across the combination.edges
- Edges represent the required fundamental topological unit
of a graph that typically connect nodes. Edges must be
specified using identifiers; identifiers are grouped as
combinations. Identifiers can be used
with existing column names, e.g., 'table.column AS
EDGE_ID', expressions, e.g., 'SUBSTR(column, 1, 6) AS
EDGE_NODE1_NAME', or constant values, e.g., "{'family',
'coworker'} AS EDGE_LABEL". If using constant values in an
identifier combination, the number of values specified
must match across the combination.weights
- Weights represent a method of informing the graph solver
of the cost of including a given edge in a solution.
Weights must be specified using identifiers; identifiers are grouped
as combinations. Identifiers can be used
with existing column names, e.g., 'table.column AS
WEIGHTS_EDGE_ID', expressions, e.g., 'ST_LENGTH(wkt) AS
WEIGHTS_VALUESPECIFIED', or constant values, e.g., '{4,
15} AS WEIGHTS_VALUESPECIFIED'. If using constant values
in an identifier combination, the number of values
specified must match across the combination.restrictions
- Restrictions represent a method of informing the
graph solver which edges and/or nodes should be
ignored for the solution. Restrictions must be
specified using identifiers; identifiers are
grouped as combinations. Identifiers can be
used with existing column names, e.g.,
'table.column AS RESTRICTIONS_EDGE_ID',
expressions, e.g., 'column/2 AS
RESTRICTIONS_VALUECOMPARED', or constant values,
e.g., '{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'.
If using constant values in an identifier
combination, the number of values specified must
match across the combination.options
- Optional parameters.
MERGE_TOLERANCE
: If node geospatial positions
are input (e.g., WKTPOINT, X, Y), determines the
minimum separation allowed between unique nodes.
If nodes are within the tolerance of each other,
they will be merged as a single node. The
default value is '1.0E-5'.
RECREATE
: If set to TRUE
and the graph (using graphName
)
already exists, the graph is deleted and
recreated.
Supported values: TRUE, FALSE.
The default value is FALSE.
SAVE_PERSIST
: If set to TRUE
, the graph will be saved in the persist
directory (see the config reference for more
information). If set to FALSE
, the graph will be removed when the graph
server is shut down.
Supported values: TRUE, FALSE.
The default value is FALSE.
ADD_TABLE_MONITOR
: Adds a table monitor to
every table used in the creation of the graph;
this table monitor will trigger the graph to
update dynamically upon inserts to the source
table(s). Note that upon database restart, if
SAVE_PERSIST
is also set to TRUE
, the graph will be fully reconstructed and
the table monitors will be reattached. For more
details on table monitors, see createTableMonitor
.
Supported values: TRUE, FALSE.
The default value is FALSE.
GRAPH_TABLE
: If specified, the created graph is
also created as a table with the given name, in
[schema_name.]table_name format, using standard
name resolution rules and
meeting table naming criteria. The
table will have the following identifier
columns: 'EDGE_ID', 'EDGE_NODE1_ID',
'EDGE_NODE2_ID'. If left blank, no table is
created. The default value is ''.
ADD_TURNS
: Adds dummy 'pillowed' edges around
intersection nodes where there are more than
three edges so that additional weight penalties
can be imposed by the solve endpoints (this
increases the total number of edges).
Supported values: TRUE, FALSE.
The default value is FALSE.
IS_PARTITIONED
:
Supported values: TRUE, FALSE.
The default value is FALSE.
SERVER_ID
: Indicates which graph server(s) to
send the request to. Default is to send to the
server with the most available memory.
USE_RTREE
: Use a range tree structure to
accelerate and improve the accuracy of snapping,
especially to edges.
Supported values: TRUE, FALSE.
The default value is TRUE.
LABEL_DELIMITER
: If provided, the label string
will be split according to this delimiter and
each sub-string will be applied as a separate
label onto the specified edge. The default value
is ''.
ALLOW_MULTIPLE_EDGES
: Multigraph choice; if set
to true, multiple edges with the same node pairs
are allowed; otherwise, new edges duplicating
existing node pairs will not be inserted.
Supported values: TRUE, FALSE.
The default value is TRUE.
EMBEDDING_TABLE
: If the table exists (it should
be generated by the match/graph match_embedding
solver), the vector embeddings for the newly
inserted nodes will be appended into this table.
The default value is ''.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateJobResponse createJob(CreateJobRequest request) throws GPUdbException
getJob
.request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public CreateJobResponse createJob(String endpoint, String requestEncoding, ByteBuffer data, String dataStr, Map<String,String> options) throws GPUdbException
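A sketch of submitting an asynchronous job with a JSON payload via this overload. The target table, tag, and server URL are hypothetical; the payload shape must match the wrapped endpoint's request:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class CreateJobSketch {
    public static void main(String[] args) throws Exception {
        // JSON-encoded payload, so requestEncoding must be "json".
        String dataStr =
            "{\"table_name\":\"example_table\",\"action\":\"refresh\",\"value\":\"\"}";
        ByteBuffer data = ByteBuffer.wrap(new byte[0]); // unused with JSON

        Map<String, String> options = new HashMap<>();
        options.put("job_tag", "nightly-refresh-01"); // JOB_TAG

        // Against a live cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // CreateJobResponse resp = db.createJob(
        //         "/alter/table", "json", data, dataStr, options);
        // The returned job ID can then be polled via getJob.

        System.out.println("payload bytes: "
                + dataStr.getBytes(StandardCharsets.UTF_8).length);
    }
}
```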
getJob
.endpoint
- Indicates which endpoint to execute, e.g.
'/alter/table'.requestEncoding
- The encoding of the request payload for the job.
Supported values:
The default value is BINARY
.data
- Binary-encoded payload for the job to be run
asynchronously. The payload must contain the relevant
input parameters for the endpoint indicated in endpoint
. Please see the documentation for the
appropriate endpoint to see what values must (or can) be
specified. If this parameter is used, then requestEncoding
must be BINARY
or SNAPPY
.dataStr
- JSON-encoded payload for the job to be run
asynchronously. The payload must contain the relevant
input parameters for the endpoint indicated in endpoint
. Please see the documentation for the
appropriate endpoint to see what values must (or can) be
specified. If this parameter is used, then requestEncoding
must be JSON
.options
- Optional parameters.
REMOVE_JOB_ON_COMPLETE
:
Supported values: TRUE, FALSE.
JOB_TAG
: Tag to use for the submitted job. The same
tag could be used on a backup cluster to retrieve
the response for the job. Tags can use letters,
numbers, '_' and '-'.
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public CreateJoinTableResponse createJoinTable(CreateJoinTableRequest request) throws GPUdbException
For join details and examples see: Joins. For limitations, see Join Limitations and Cautions.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateJoinTableResponse createJoinTable(String joinTableName, List<String> tableNames, List<String> columnNames, List<String> expressions, Map<String,String> options) throws GPUdbException
For join details and examples see: Joins. For limitations, see Join Limitations and Cautions.
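A sketch of this overload joining two hypothetical tables; the table names, columns, and server URL are assumptions, and option keys are assumed lowercase forms of the documented constants:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CreateJoinTableSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical member tables 'orders' and 'customers', aliased.
        List<String> tableNames = Arrays.asList("orders as o", "customers as c");
        List<String> columnNames = Arrays.asList(
                "o.order_id", "c.name as customer_name", "o.total");
        List<String> expressions = Arrays.asList("o.customer_id = c.id");

        Map<String, String> options = new HashMap<>();
        options.put("create_temp_table", "true"); // CREATE_TEMP_TABLE

        // Against a live cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createJoinTable("example_join", tableNames, columnNames,
        //         expressions, options);

        System.out.println("join members: " + tableNames.size());
    }
}
```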
joinTableName
- Name of the join table to be created, in
[schema_name.]table_name format, using standard name resolution rules and
meeting table naming criteria.tableNames
- The list of table names composing the join, each in
[schema_name.]table_name format, using standard name resolution rules. Corresponds
to a SQL statement FROM clause.columnNames
- List of member table columns or column expressions
to be included in the join. Columns can be prefixed
with 'table_id.column_name', where 'table_id' is the
table name or alias. Columns can be aliased via the
syntax 'column_name as alias'. Wild cards '*' can be
used to include all columns across member tables or
'table_id.*' for all of a single table's columns.
Columns and column expressions composing the join
must be uniquely named or aliased--therefore, the
'*' wild card cannot be used if column names aren't
unique across all tables.expressions
- An optional list of expressions to combine and
filter the joined tables. Corresponds to a SQL
statement WHERE clause. For details see: expressions. The default value is
an empty List
.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of joinTableName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_JOIN_TABLE_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the join as part of
joinTableName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the join. If the schema is
non-existent, it will be automatically created.
The default value is ''.
MAX_QUERY_DIMENSIONS
: No longer used.
OPTIMIZE_LOOKUPS
: Use more memory to speed up
the joining of tables.
Supported values: TRUE, FALSE.
The default value is FALSE.
STRATEGY_DEFINITION
: The tier strategy for the table
and its columns.
TTL
: Sets the TTL of the join table
specified in joinTableName
.
VIEW_ID
: ID of the view this join table is part of. The
default value is ''.
NO_COUNT
: Return a count of 0 for the join
table for logging and for showTable
;
optimization needed for large overlapped
equi-join stencils. The default value is
'false'.
CHUNK_SIZE
: Maximum number of records per
joined-chunk for this table. Defaults to the
gpudb.conf file chunk size
ENABLE_VIRTUAL_CHUNKING
: Collect chunks with
accumulated size less than chunk_size into a
single chunk. The default value is 'false'.
ENABLE_PK_EQUI_JOIN
: Use equi-join to do
primary key joins rather than using
primary-key-index
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateMaterializedViewResponse createMaterializedView(CreateMaterializedViewRequest request) throws GPUdbException
For materialized view details and examples, see Materialized Views.
The response contains viewId
,
which is used to tag each subsequent operation (projection, union,
aggregation, filter, or join) that will compose the view.
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateMaterializedViewResponse createMaterializedView(String tableName, Map<String,String> options) throws GPUdbException
For materialized view details and examples, see Materialized Views.
The response contains viewId
,
which is used to tag each subsequent operation (projection, union,
aggregation, filter, or join) that will compose the view.
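A sketch of this overload with a periodic refresh; the view name, schedule values, and server URL are hypothetical, and option keys are assumed lowercase forms of the documented constants:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateMaterializedViewSketch {
    public static void main(String[] args) throws Exception {
        Map<String, String> options = new HashMap<>();
        options.put("refresh_method", "periodic");             // REFRESH_METHOD
        options.put("refresh_period", "3600");                 // every hour
        options.put("refresh_start_time", "2024-01-01 00:00:00");
        options.put("persist", "true");                        // PERSIST

        // Against a live cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createMaterializedView("example_schema.example_mv", options);
        // The viewId in the response tags the subsequent operations
        // (projection, union, aggregation, filter, join) composing the view.

        System.out.println("refresh every "
                + options.get("refresh_period") + "s");
    }
}
```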
tableName
- Name of the table to be created that is the top-level
table of the materialized view, in
[schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.options
- Optional parameters.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the materialized view
as part of tableName
and use createSchema
to
create the schema if non-existent] Name of a
schema which is to contain the newly created
view. If the schema provided is non-existent, it
will be automatically created.
EXECUTE_AS
: User name to use to run the refresh
job
BUILD_MATERIALIZED_VIEW_POLICY
: Sets startup
materialized view rebuild scheme.
Supported values:
ALWAYS
: Rebuild as many materialized
views as possible before accepting
requests.
LAZY
: Rebuild the necessary
materialized views at start, and load
the remainder lazily.
ON_DEMAND
: Rebuild materialized views
as requests use them.
SYSTEM
: Rebuild materialized views
using the system-configured default.
The default value is SYSTEM.
PERSIST
: If TRUE
, then the materialized view specified in
tableName
will be persisted and will not
expire unless a TTL
is specified. If FALSE
, then the materialized view will be an
in-memory table and will expire unless a TTL
is specified otherwise.
Supported values: TRUE, FALSE.
The default value is FALSE.
REFRESH_SPAN
: Sets the future time offset (in
seconds) at which periodic refresh stops.
REFRESH_STOP_TIME
: When REFRESH_METHOD
is PERIODIC
, specifies the time at which a
periodic refresh is stopped. Value is a
datetime string with format 'YYYY-MM-DD
HH:MM:SS'.
REFRESH_METHOD
: Method by which the materialized view can
be refreshed when the data in the underlying
member tables has changed.
Supported values:
MANUAL
: Refresh only occurs when
manually requested by calling alterTable
with an 'action' of
'refresh'
ON_QUERY
: Refresh any time the view is
queried.
ON_CHANGE
: If possible, incrementally
refresh (refresh just those records
added) whenever an insert, update,
delete or refresh of input table is
done. A full refresh is done if an
incremental refresh is not possible.
PERIODIC
: Refresh table periodically at
rate specified by REFRESH_PERIOD
The default value is MANUAL.
REFRESH_PERIOD
: When REFRESH_METHOD
is PERIODIC
, specifies the period in seconds at
which refresh occurs
REFRESH_START_TIME
: When REFRESH_METHOD
is PERIODIC
, specifies the first time at which a
refresh is to be done. Value is a datetime
string with format 'YYYY-MM-DD HH:MM:SS'.
TTL
: Sets the TTL of the table specified in
tableName
.
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateProcResponse createProc(CreateProcRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateProcResponse createProc(String procName, String executionMode, Map<String,ByteBuffer> files, String command, List<String> args, Map<String,String> options) throws GPUdbException
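A sketch of this overload registering a single-file distributed proc. The script body, proc name, and server URL are hypothetical; execution mode and option keys are assumed to be the lowercase string forms of the documented constants:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CreateProcSketch {
    public static void main(String[] args) throws Exception {
        // A single-file proc; the script body is illustrative only.
        String script = "print('processing a data segment')\n";
        Map<String, ByteBuffer> files = new HashMap<>();
        files.put("segment_proc.py",
                ByteBuffer.wrap(script.getBytes(StandardCharsets.UTF_8)));

        String command = "python";                    // resolved on each node
        List<String> procArgs = Arrays.asList("segment_proc.py");

        Map<String, String> options = new HashMap<>();
        options.put("max_concurrency_per_node", "2"); // MAX_CONCURRENCY_PER_NODE

        // Against a live cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createProc("example_proc", "distributed", files, command,
        //         procArgs, options);

        System.out.println("proc files: " + files.keySet());
    }
}
```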
procName
- Name of the proc to be created. Must not be the name of
a currently existing proc.executionMode
- The execution mode of the proc.
Supported values:
DISTRIBUTED
: Input table data will be
divided into data segments that are
distributed across all nodes in the
cluster, and the proc command will be
invoked once per data segment in parallel.
Output table data from each invocation
will be saved to the same node as the
corresponding input data.
NONDISTRIBUTED
: The proc command will be
invoked only once per execution, and will
not have direct access to any tables named
as input or output table parameters in the
call to executeProc
. It will, however, be able
to access the database using native API
calls.
The default value is DISTRIBUTED.
files
- A map of the files that make up the proc. The keys of the
map are file names, and the values are the binary contents
of the files. The file names may include subdirectory
names (e.g. 'subdir/file') but must not resolve to a
directory above the root for the proc. Files may be
loaded from existing files in KiFS. Those file names
should be prefixed with the uri kifs:// and the values in
the map should be empty. The default value is an empty
Map
.command
- The command (excluding arguments) that will be invoked
when the proc is executed. It will be invoked from the
directory containing the proc files
and may be
any command that can be resolved from that directory. It
need not refer to a file actually in that directory; for
example, it could be 'java' if the proc is a Java
application; however, any necessary external programs
must be preinstalled on every database node. If the
command refers to a file in that directory, it must be
preceded with './' as per Linux convention. If not
specified, and exactly one file is provided in files
, that file will be invoked. The default value is
''.args
- An array of command-line arguments that will be passed to
command
when the proc is executed. The default
value is an empty List
.options
- Optional parameters.
MAX_CONCURRENCY_PER_NODE
: The maximum number of
concurrent instances of the proc that will be
executed per node. 0 allows unlimited
concurrency. The default value is '0'.
SET_ENVIRONMENT
: A python environment to use
when executing the proc. Must be an existing
environment, else an error will be returned. The
default value is ''.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateProjectionResponse createProjection(CreateProjectionRequest request) throws GPUdbException
For projection details and examples, see Projections. For limitations, see Projection Limitations and Cautions.
Window
functions, which can perform operations like moving averages, are
available through this endpoint as well as getRecordsByColumn
.
A projection can be created with a different shard
key than the source table. By specifying SHARD_KEY
,
the projection will be sharded according to the specified columns,
regardless of how the source table is sharded. The source table can
even be unsharded or replicated.
If tableName
is empty, selection is performed against a single-row virtual
table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateProjectionResponse createProjection(String tableName, String projectionName, List<String> columnNames, Map<String,String> options) throws GPUdbException
For projection details and examples, see Projections. For limitations, see Projection Limitations and Cautions.
Window
functions, which can perform operations like moving averages, are
available through this endpoint as well as getRecordsByColumn
.
A projection can be created with a different shard
key than the source table. By specifying SHARD_KEY
,
the projection will be sharded according to the specified columns,
regardless of how the source table is sharded. The source table can
even be unsharded or replicated.
If tableName
is empty, selection is performed against a
single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
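A sketch of this overload; the source table, its columns, and the server URL are hypothetical, and option keys are assumed lowercase forms of the documented constants:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CreateProjectionSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical source columns 'ts', 'x', 'y'; one derived column.
        List<String> columnNames = Arrays.asList(
                "ts", "x", "y", "x * y as area");

        Map<String, String> options = new HashMap<>();
        options.put("expression", "x > 0"); // EXPRESSION: pre-filter source
        options.put("order_by", "ts asc");  // 'ts' is present in columnNames
        options.put("limit", "1000");       // keep first 1000 records
        options.put("persist", "true");     // PERSIST

        // Against a live cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createProjection("example_schema.source_table",
        //         "example_schema.example_proj", columnNames, options);

        System.out.println("projected columns: " + columnNames.size());
    }
}
```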
tableName
- Name of the existing table on which the projection is
to be applied, in [schema_name.]table_name format,
using standard name resolution rules. An empty
table name creates a projection from a single-row
virtual table, where columns specified should be
constants or constant expressions.projectionName
- Name of the projection to be created, in
[schema_name.]table_name format, using standard
name resolution rules and
meeting table naming criteria.columnNames
- List of columns from tableName
to be
included in the projection. Can include derived
columns. Columns can be aliased via the syntax
'column_name as alias'.
options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of projectionName
. If PERSIST
is FALSE
(or unspecified), then this is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_PROJECTION_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the projection as part
of projectionName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the projection. If the schema is
non-existent, it will be automatically created.
The default value is ''.
EXPRESSION
: An optional filter expression to be applied to
the source table prior to the projection. The
default value is ''.
IS_REPLICATED
: If TRUE
then the projection will be replicated
even if the source table is not.
Supported values: TRUE, FALSE.
The default value is FALSE.
OFFSET
: The number of initial results to skip
(this can be useful for paging through the
results). The default value is '0'.
LIMIT
: The number of records to keep. The
default value is '-9999'.
ORDER_BY
: Comma-separated list of the columns
to be sorted by; e.g. 'timestamp asc, x desc'.
The columns specified must be present in columnNames
. If any alias is given for any
column name, the alias must be used, rather than
the original column name. The default value is
''.
CHUNK_SIZE
: Indicates the number of records per
chunk to be used for this projection.
CHUNK_COLUMN_MAX_MEMORY
: Indicates the target
maximum data size for each column in a chunk to
be used for this projection.
CHUNK_MAX_MEMORY
: Indicates the target maximum
data size for all columns in a chunk to be used
for this projection.
CREATE_INDEXES
: Comma-separated list of columns
on which to create indexes on the projection.
The columns specified must be present in columnNames
. If any alias is given for any
column name, the alias must be used, rather than
the original column name.
TTL
: Sets the TTL of the projection
specified in projectionName
.
SHARD_KEY
: Comma-separated list of the columns
to be sharded on; e.g. 'column1, column2'. The
columns specified must be present in columnNames
. If any alias is given for any
column name, the alias must be used, rather than
the original column name. The default value is
''.
PERSIST
: If TRUE
, then the projection specified in projectionName
will be persisted and will not
expire unless a TTL
is specified. If FALSE
, then the projection will be an in-memory
table and will expire unless a TTL
is specified otherwise.
Supported values: TRUE, FALSE.
The default value is FALSE.
PRESERVE_DICT_ENCODING
: If TRUE
, then columns that were dict encoded in
the source table will be dict encoded in the
projection.
Supported values: TRUE, FALSE.
The default value is TRUE.
RETAIN_PARTITIONS
: Determines whether the
created projection will retain the partitioning
scheme from the source table.
Supported values: TRUE, FALSE.
The default value is FALSE.
PARTITION_TYPE
: Partitioning scheme to use.
Supported values:
RANGE
: Use range partitioning.
INTERVAL
: Use interval partitioning.
LIST
: Use list partitioning.
HASH
: Use hash partitioning.
SERIES
: Use series partitioning.
PARTITION_KEYS
: Comma-separated list of
partition keys, which are the columns or column
expressions by which records will be assigned to
partitions defined by PARTITION_DEFINITIONS
.
PARTITION_DEFINITIONS
: Comma-separated list of
partition definitions, whose format depends on
the choice of PARTITION_TYPE
. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for
example formats.
IS_AUTOMATIC_PARTITION
: If TRUE
, a new partition will be created for
values which don't fall into an existing
partition. Currently only supported for list partitions.
Supported values: TRUE, FALSE.
The default value is FALSE.
VIEW_ID
: ID of view of which this projection is
a member. The default value is ''.
STRATEGY_DEFINITION
: The tier strategy for the table
and its columns.
JOIN_WINDOW_FUNCTIONS
: If set, window functions
which require a reshard will be computed
separately and joined back together, if the
width of the projection is greater than the
join_window_functions_threshold. The default
value is 'true'.
JOIN_WINDOW_FUNCTIONS_THRESHOLD
: If the
projection is greater than this width (in
bytes), then window functions which require a
reshard will be computed separately and joined
back together. The default value is ''.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateResourceGroupResponse createResourceGroup(CreateResourceGroupRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateResourceGroupResponse createResourceGroup(String name, Map<String,Map<String,String>> tierAttributes, String ranking, String adjoiningResourceGroup, Map<String,String> options) throws GPUdbException
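A sketch of this overload capping the RAM tier; the group names, limits, and server URL are hypothetical, and the ranking value and option keys are assumed lowercase forms of the documented constants:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateResourceGroupSketch {
    public static void main(String[] args) throws Exception {
        // Per-tier limits: only max_memory (bytes) is settable, on VRAM/RAM.
        Map<String, Map<String, String>> tierAttributes = new HashMap<>();
        Map<String, String> ram = new HashMap<>();
        ram.put("max_memory", "10000000000"); // 10 GB
        tierAttributes.put("RAM", ram);

        Map<String, String> options = new HashMap<>();
        options.put("max_scheduling_priority", "50"); // MAX_SCHEDULING_PRIORITY

        // Against a live cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createResourceGroup("analyst_group", tierAttributes,
        //         "before", "default_group", // rank before a hypothetical group
        //         options);

        System.out.println("RAM cap: "
                + tierAttributes.get("RAM").get("max_memory"));
    }
}
```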
name
- Name of the group to be created. Must contain only letters,
digits, and underscores, and cannot begin with a digit.
Must not match an existing resource group name.
tierAttributes
- Optional map containing tier names and their
respective attribute group limits. The only
valid attribute limit that can be set is
max_memory (in bytes) for the VRAM & RAM tiers.
For instance, to set max VRAM capacity to 1GB and
max RAM capacity to 10GB, use:
{'VRAM':{'max_memory':'1000000000'},
'RAM':{'max_memory':'10000000000'}}.
MAX_MEMORY
: Maximum amount of memory
usable in the given tier at one time for
this group.
Map
.ranking
- Indicates the relative ranking among existing resource
groups where this new resource group will be placed.
When using BEFORE
or AFTER
, specify which resource group this one will be
inserted before or after in adjoiningResourceGroup
.
Supported values:
adjoiningResourceGroup
- If ranking
is BEFORE
or AFTER
, this field indicates the resource
group before or after which the current
group will be placed; otherwise, leave
blank. The default value is ''.options
- Optional parameters.
MAX_CPU_CONCURRENCY
: Maximum number of
simultaneous threads that will be used to
execute a request for this group. The minimum
allowed value is '4'.
MAX_DATA
: Maximum amount of cumulative RAM
usage, regardless of tier status, for this group.
The minimum allowed value is '-1'.
MAX_SCHEDULING_PRIORITY
: Maximum priority of a
scheduled task for this group. The minimum
allowed value is '1'. The maximum allowed value
is '100'.
MAX_TIER_PRIORITY
: Maximum priority of a tiered
object for this group. The minimum allowed value
is '1'. The maximum allowed value is '10'.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateRoleResponse createRole(CreateRoleRequest request) throws GPUdbException
Note: This method should be used for on-premise deployments only.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateRoleResponse createRole(String name, Map<String,String> options) throws GPUdbException
Note: This method should be used for on-premise deployments only.
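A minimal sketch of this overload; the role and resource group names and the server URL are hypothetical, and the option key is assumed to be the lowercase form of the documented constant:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateRoleSketch {
    public static void main(String[] args) throws Exception {
        Map<String, String> options = new HashMap<>();
        options.put("resource_group", "analyst_group"); // RESOURCE_GROUP

        // Against a live cluster (hypothetical URL):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.createRole("analyst_role", options); // lowercase name required

        System.out.println("role options: " + options);
    }
}
```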
name
- Name of the role to be created. Must contain only lowercase
letters, digits, and underscores, and cannot begin with a
digit. Must not be the same name as an existing user or
role.options
- Optional parameters.
RESOURCE_GROUP
: Name of an existing resource
group to associate with this user
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateSchemaResponse createSchema(CreateSchemaRequest request) throws GPUdbException
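For the createRole(String, Map) overload documented above, the RESOURCE_GROUP option is a plain map entry. A minimal sketch; the client call is commented out (it requires the GPUdb Java API and a live server), and the role, group, and URL names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateRoleSketch {
    static Map<String, String> roleOptions() {
        Map<String, String> options = new HashMap<>();
        // Associate the new role with an existing resource group.
        options.put("resource_group", "analyst_group"); // illustrative group name
        return options;
    }

    public static void main(String[] args) {
        // Hypothetical call; URL and role name are placeholders:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // CreateRoleResponse resp =
        //         gpudb.createRole("read_only_analysts", roleOptions());
        System.out.println(roleOptions());
    }
}
```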
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateSchemaResponse createSchema(String schemaName, Map<String,String> options) throws GPUdbException
schemaName
- Name of the schema to be created. Has the same
naming restrictions as tables.options
- Optional parameters.
NO_ERROR_IF_EXISTS
: If TRUE
, prevents an error from occurring if the
schema already exists.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateStateTableResponse createStateTable(CreateStateTableRequest request) throws GPUdbException
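The createSchema(String, Map) overload above accepts NO_ERROR_IF_EXISTS for idempotent creation. A minimal sketch; the client call is commented out since it needs a live server, and the schema name and URL are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateSchemaSketch {
    static Map<String, String> schemaOptions() {
        Map<String, String> options = new HashMap<>();
        // Don't raise an error if the schema already exists.
        options.put("no_error_if_exists", "true");
        return options;
    }

    public static void main(String[] args) {
        // Hypothetical call; URL and schema name are placeholders:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // gpudb.createSchema("sales", schemaOptions());
        System.out.println(schemaOptions());
    }
}
```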
GPUdbException
public CreateStateTableResponse createStateTable(String tableName, String inputTableName, String initTableName, Map<String,String> options) throws GPUdbException
GPUdbException
public CreateTableResponse createTable(CreateTableRequest request) throws GPUdbException
typeId
, which must be
the ID of a currently registered type (i.e. one created via createType
).
A table may optionally be designated to use a replicated distribution scheme, or be assigned: foreign keys to other tables, a partitioning scheme, and/or a tier strategy.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateTableResponse createTable(String tableName, String typeId, Map<String,String> options) throws GPUdbException
typeId
, which must be the ID of a currently
registered type (i.e. one created via createType
).
A table may optionally be designated to use a replicated distribution scheme, or be assigned: foreign keys to other tables, a partitioning scheme, and/or a tier strategy.
tableName
- Name of the table to be created, in
[schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria. The error for
requests against an existing table of the same name and type
ID may be suppressed by using the NO_ERROR_IF_EXISTS
option.typeId
- ID of a currently registered type. All objects added to
the newly created table will be of this type.options
- Optional parameters.
NO_ERROR_IF_EXISTS
: If TRUE
, prevents an error from occurring if the
table already exists and is of the given type.
If a table with the same ID but a different type
exists, it is still an error.
Supported values:
The default value is FALSE
.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of tableName
. If IS_RESULT_TABLE
is TRUE
, then this is always allowed even if the
caller does not have permission to create
tables. The generated name is returned in QUALIFIED_TABLE_NAME
.
Supported values:
The default value is FALSE
.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema as part of tableName
and use createSchema
to
create the schema if non-existent] Name of a
schema which is to contain the newly created
table. If the schema is non-existent, it will be
automatically created.
IS_COLLECTION
: [DEPRECATED--please use createSchema
to
create a schema instead] Indicates whether to
create a schema instead of a table.
Supported values:
The default value is FALSE
.
DISALLOW_HOMOGENEOUS_TABLES
: No longer
supported; value will be ignored.
Supported values:
The default value is FALSE
.
IS_REPLICATED
: Affects the distribution scheme for the
table's data. If TRUE
and the given type has no explicit shard key defined, the table
will be replicated. If FALSE
, the table will be sharded according to the shard
key specified in the given typeId
, or randomly sharded, if no shard
key is specified. Note that a type containing a
shard key cannot be used to create a replicated
table.
Supported values:
The default value is FALSE
.
FOREIGN_KEYS
: Semicolon-separated list of foreign keys, of the format
'(source_column_name [, ...]) references
target_table_name(primary_key_column_name [,
...]) [as foreign_key_name]'.
FOREIGN_SHARD_KEY
: Foreign shard key of the
format 'source_column references shard_by_column
from target_table(primary_key_column)'.
PARTITION_TYPE
: Partitioning scheme to use.
Supported values:
RANGE
: Use range partitioning.
INTERVAL
: Use interval partitioning.
LIST
: Use list partitioning.
HASH
: Use hash partitioning.
SERIES
: Use series partitioning.
PARTITION_KEYS
: Comma-separated list of
partition keys, which are the columns or column
expressions by which records will be assigned to
partitions defined by PARTITION_DEFINITIONS
.
PARTITION_DEFINITIONS
: Comma-separated list of
partition definitions, whose format depends on
the choice of PARTITION_TYPE
. See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for
example formats.
IS_AUTOMATIC_PARTITION
: If TRUE
, a new partition will be created for
values which don't fall into an existing
partition. Currently only supported for list partitions.
Supported values:
The default value is FALSE
.
TTL
: Sets the TTL of the table specified in
tableName
.
CHUNK_SIZE
: Indicates the number of records per
chunk to be used for this table.
CHUNK_COLUMN_MAX_MEMORY
: Indicates the target
maximum data size for each column in a chunk to
be used for this table.
CHUNK_MAX_MEMORY
: Indicates the target maximum
data size for all columns in a chunk to be used
for this table.
IS_RESULT_TABLE
: Indicates whether the table is
a memory-only table. A result
table cannot contain columns with store_only or
text_search data-handling or that are non-charN strings, and it will
not be retained if the server is restarted.
Supported values:
The default value is FALSE
.
STRATEGY_DEFINITION
: The tier strategy for the table
and its columns.
LOAD_VECTORS_POLICY
: Set startup data loading
scheme for the table.
Supported values:
ALWAYS
: Load as much vector data as
possible into memory before accepting
requests.
LAZY
: Load the necessary vector data at
start, and load the remainder lazily.
ON_DEMAND
: Load vector data as requests
use it.
SYSTEM
: Load vector data using the
system-configured default.
SYSTEM
.
BUILD_PK_INDEX_POLICY
: Set startup primary-key
index generation scheme for the table.
Supported values:
ALWAYS
: Generate as much primary key
index data as possible before accepting
requests.
LAZY
: Generate the necessary primary
key index data at start, and load the
remainder lazily.
ON_DEMAND
: Generate primary key index
data as requests use it.
SYSTEM
: Generate primary key index data
using the system-configured default.
SYSTEM
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateTableExternalResponse createTableExternal(CreateTableExternalRequest request) throws GPUdbException
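The createTable(String, String, Map) options above are all passed as strings. A minimal sketch of assembling a few of them; the client call is commented out since it requires a registered type ID (from createType) and a live server, and the table name, URL, and chunk size are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateTableSketch {
    static Map<String, String> tableOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("no_error_if_exists", "true"); // idempotent when type matches
        options.put("is_replicated", "false");     // shard by the type's shard key
        options.put("chunk_size", "524288");       // illustrative records-per-chunk
        return options;
    }

    public static void main(String[] args) {
        // Hypothetical call; typeId must come from a prior createType call:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // CreateTableResponse resp =
        //         gpudb.createTable("sales.orders", typeId, tableOptions());
        System.out.println(tableOptions());
    }
}
```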
The external table can have its structure defined explicitly, via createTableOptions
, which contains many of the options from createTable
; or defined
implicitly, inferred from the source data.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateTableExternalResponse createTableExternal(String tableName, List<String> filepaths, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options) throws GPUdbException
The external table can have its structure defined explicitly, via createTableOptions
, which contains many of the options from createTable
; or defined
implicitly, inferred from the source data.
tableName
- Name of the table to be created, in
[schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.filepaths
- A list of file paths from which data will be sourced;
For paths in KiFS, use the URI prefix kifs://
followed by the path to a file or directory. File
matching by prefix is supported, e.g. kifs://dir/file
would match dir/file_1 and dir/file_2. When prefix
matching is used, the path must start with a full,
valid KiFS directory name. If an external data source
is specified in DATASOURCE_NAME
, these file paths must resolve to
accessible files at that data source location. Prefix
matching is supported. If the data source is HDFS,
prefixes must be aligned with directories, i.e.
partial file names will not match. If no data source
is specified, the files are assumed to be local to the
database and must all be accessible to the gpudb user,
residing on the path (or relative to the path)
specified by the external files directory in the
Kinetica configuration file. Wildcards (*)
can be used to specify a group of files. Prefix
matching is supported; the prefixes must be aligned
with directories. If the first path ends in .tsv, the
text delimiter will be defaulted to a tab character.
If the first path ends in .psv, the text delimiter
will be defaulted to a pipe character (|).modifyColumns
- Not implemented yet. The default value is an empty
Map
.createTableOptions
- Options from createTable
, allowing the
structure of the table to be defined
independently of the data source.
TYPE_ID
: ID of a currently
registered type.
NO_ERROR_IF_EXISTS
: If TRUE
, prevents an error from
occurring if the table already exists
and is of the given type. If a table
with the same name but a different
type exists, it is still an error.
Supported values:
The default value is FALSE
.
IS_REPLICATED
: Affects the distribution scheme
for the table's data. If TRUE
and the given table has no
explicit shard key defined,
the table will be replicated. If
FALSE
, the table will be sharded according
to the shard key specified in the
given TYPE_ID
, or randomly sharded,
if no shard key is specified. Note
that a type containing a shard key
cannot be used to create a replicated
table.
Supported values:
The default value is FALSE
.
FOREIGN_KEYS
: Semicolon-separated
list of foreign keys, of
the format '(source_column_name [,
...]) references
target_table_name(primary_key_column_name
[, ...]) [as foreign_key_name]'.
FOREIGN_SHARD_KEY
: Foreign shard key
of the format 'source_column
references shard_by_column from
target_table(primary_key_column)'.
PARTITION_TYPE
: Partitioning scheme
to use.
Supported values:
RANGE
: Use range
partitioning.
INTERVAL
: Use interval
partitioning.
LIST
: Use list
partitioning.
HASH
: Use hash
partitioning.
SERIES
: Use series
partitioning.
PARTITION_KEYS
: Comma-separated list
of partition keys, which are the
columns or column expressions by
which records will be assigned to
partitions defined by PARTITION_DEFINITIONS
.
PARTITION_DEFINITIONS
:
Comma-separated list of partition
definitions, whose format depends on
the choice of PARTITION_TYPE
. See range partitioning,
interval
partitioning, list partitioning,
hash partitioning,
or series partitioning
for example formats.
IS_AUTOMATIC_PARTITION
: If TRUE
, a new partition will be
created for values which don't fall
into an existing partition.
Currently, only supported for list partitions.
Supported values:
The default value is FALSE
.
TTL
: Sets the TTL of the table
specified in tableName
.
CHUNK_SIZE
: Indicates the number of
records per chunk to be used for this
table.
CHUNK_COLUMN_MAX_MEMORY
: Indicates
the target maximum data size for each
column in a chunk to be used for this
table.
CHUNK_MAX_MEMORY
: Indicates the
target maximum data size for all
columns in a chunk to be used for
this table.
IS_RESULT_TABLE
: Indicates whether
the table is a memory-only table.
A result table cannot contain columns
with text_search data-handling, and
it will not be retained if the server
is restarted.
Supported values:
The default value is FALSE
.
STRATEGY_DEFINITION
: The tier strategy for
the table and its columns.
Map
.options
- Optional parameters.
BAD_RECORD_TABLE_NAME
: Name of a table to which
records that were rejected are written. The
bad-record-table has the following columns:
line_number (long), line_rejected (string),
error_message (string). When ERROR_HANDLING
is ABORT
, the bad-record table is not populated.
BAD_RECORD_TABLE_LIMIT
: A positive integer
indicating the maximum number of records that
can be written to the bad-record-table. The
default value is '10000'.
BAD_RECORD_TABLE_LIMIT_PER_INPUT
: For
subscriptions, a positive integer indicating the
maximum number of records that can be written to
the bad-record-table per file/payload. Default
value will be BAD_RECORD_TABLE_LIMIT
and total size of the
table per rank is limited to BAD_RECORD_TABLE_LIMIT
.
BATCH_SIZE
: Number of records to insert per
batch when inserting data. The default value is
'50000'.
COLUMN_FORMATS
: For each target column
specified, applies the column-property-bound
format to the source data loaded into that
column. Each column format will contain a
mapping of one or more of its column properties
to an appropriate format for each property.
Currently supported column properties include
date, time, & datetime. The parameter value must
be formatted as a JSON string of maps of column
names to maps of column properties to their
corresponding column formats, e.g., '{
"order_date" : { "date" : "%Y.%m.%d" },
"order_time" : { "time" : "%H:%M:%S" } }'. See
DEFAULT_COLUMN_FORMATS
for valid format syntax.
COLUMNS_TO_LOAD
: Specifies a comma-delimited
list of columns from the source data to load.
If more than one file is being loaded, this list
applies to all files. Column numbers can be
specified discretely or as a range. For
example, a value of '5,7,1..3' will insert
values from the fifth column in the source data
into the first column in the target table, from
the seventh column in the source data into the
second column in the target table, and from the
first through third columns in the source data
into the third through fifth columns in the
target table. If the source data contains a
header, column names matching the file header
names may be provided instead of column numbers.
If the target table doesn't exist, the table
will be created with the columns in this order.
If the target table does exist with columns in a
different order than the source data, this list
can be used to match the order of the target
table. For example, a value of 'C, B, A' will
create a three column table with column C,
followed by column B, followed by column A; or
will insert those fields in that order into a
table created with columns in that order. If
the target table exists, the column names must
match the source data field names for a
name-mapping to be successful. Mutually
exclusive with COLUMNS_TO_SKIP
.
COLUMNS_TO_SKIP
: Specifies a comma-delimited
list of columns from the source data to skip.
Mutually exclusive with COLUMNS_TO_LOAD
.
COMPRESSION_TYPE
: Source data compression type.
Supported values:
NONE
: No compression.
AUTO
: Auto-detect the compression type.
GZIP
: gzip file compression.
BZIP2
: bzip2 file compression.
AUTO
.
DATASOURCE_NAME
: Name of an existing external
data source from which data file(s) specified in
filepaths
will be loaded
DEFAULT_COLUMN_FORMATS
: Specifies the default
format to be applied to source data loaded into
columns with the corresponding column property.
Currently supported column properties include
date, time, & datetime. This default
column-property-bound format can be overridden
by specifying a column property & format for a
given target column in COLUMN_FORMATS
. For each specified annotation,
the format will apply to all columns with that
annotation unless a custom COLUMN_FORMATS
for that annotation is
specified. The parameter value must be
formatted as a JSON string that is a map of
column properties to their respective column
formats, e.g., '{ "date" : "%Y.%m.%d", "time" :
"%H:%M:%S" }'. Column formats are specified as
a string of control characters and plain text.
The supported control characters are 'Y', 'm',
'd', 'H', 'M', 'S', and 's', which follow the
Linux 'strptime()' specification, as well as
's', which specifies seconds and fractional
seconds (though the fractional component will be
truncated past milliseconds). Formats for the
'date' annotation must include the 'Y', 'm', and
'd' control characters. Formats for the 'time'
annotation must include the 'H', 'M', and either
'S' or 's' (but not both) control characters.
Formats for the 'datetime' annotation meet both
the 'date' and 'time' control character
requirements. For example, '{"datetime" :
"%m/%d/%Y %H:%M:%S" }' would be used to
interpret text as "05/04/2000 12:12:11"
ERROR_HANDLING
: Specifies how errors should be
handled upon insertion.
Supported values:
PERMISSIVE
: Records with missing
columns are populated with nulls if
possible; otherwise, the malformed
records are skipped.
IGNORE_BAD_RECORDS
: Malformed records
are skipped.
ABORT
: Stops current insertion and
aborts entire operation when an error is
encountered. Primary key collisions are
considered abortable errors in this
mode.
ABORT
.
EXTERNAL_TABLE_TYPE
: Specifies whether the
external table holds a local copy of the
external data.
Supported values:
MATERIALIZED
: Loads a copy of the
external data into the database,
refreshed on demand
LOGICAL
: External data will not be
loaded into the database; the data will
be retrieved from the source upon
servicing each query against the
external table
MATERIALIZED
.
FILE_TYPE
: Specifies the type of the file(s)
whose records will be inserted.
Supported values:
AVRO
: Avro file format
DELIMITED_TEXT
: Delimited text file
format; e.g., CSV, TSV, PSV, etc.
GDB
: Esri/GDB file format
JSON
: Json file format
PARQUET
: Apache Parquet file format
SHAPEFILE
: ShapeFile file format
DELIMITED_TEXT
.
FLATTEN_COLUMNS
: Specifies how to handle nested
columns.
Supported values:
TRUE
: Break up nested columns to
multiple columns
FALSE
: Treat nested columns as json
columns instead of flattening
FALSE
.
GDAL_CONFIGURATION_OPTIONS
: Comma-separated
list of GDAL configuration options for this
specific request, as key=value pairs.
IGNORE_EXISTING_PK
: Specifies the record
collision error-suppression policy for inserting
into a table with a primary key, only used when
not in upsert mode (upsert mode is disabled when
UPDATE_ON_EXISTING_PK
is FALSE
). If set to TRUE
, any record being inserted that is
rejected for having primary key values that
match those of an existing table record will be
ignored with no error generated. If FALSE
, the rejection of any record for having
primary key values matching an existing record
will result in an error being reported, as
determined by ERROR_HANDLING
. If the specified table does
not have a primary key or if upsert mode is in
effect (UPDATE_ON_EXISTING_PK
is TRUE
), then this option has no effect.
Supported values:
TRUE
: Ignore new records whose primary
key values collide with those of
existing records
FALSE
: Treat as errors any new records
whose primary key values collide with
those of existing records
FALSE
.
INGESTION_MODE
: Whether to do a full load, dry
run, or perform a type inference on the source
data.
Supported values:
FULL
: Run a type inference on the
source data (if needed) and ingest
DRY_RUN
: Does not load data, but walks
through the source data and determines
the number of valid records, taking into
account the current mode of ERROR_HANDLING
.
TYPE_INFERENCE_ONLY
: Infer the type of
the source data and return, without
ingesting any data. The inferred type
is returned in the response.
FULL
.
JDBC_FETCH_SIZE
: The JDBC fetch size, which
determines how many rows to fetch per round
trip. The default value is '50000'.
KAFKA_CONSUMERS_PER_RANK
: Number of Kafka
consumer threads per rank (valid range 1-6). The
default value is '1'.
KAFKA_GROUP_ID
: The group id to be used when
consuming data from a Kafka topic (valid only
for Kafka datasource subscriptions).
KAFKA_OFFSET_RESET_POLICY
: Policy to determine
whether the Kafka data consumption starts either
at earliest offset or latest offset.
Supported values:
The default value is EARLIEST
.
KAFKA_OPTIMISTIC_INGEST
: Enable optimistic
ingestion where Kafka topic offsets and table
data are committed independently to achieve
parallelism.
Supported values:
The default value is FALSE
.
KAFKA_SUBSCRIPTION_CANCEL_AFTER
: Sets the Kafka
subscription lifespan (in minutes). Expired
subscriptions will be cancelled automatically.
KAFKA_TYPE_INFERENCE_FETCH_TIMEOUT
: Maximum
time to collect Kafka messages before running
type inference on the collected set.
LAYER
: Comma-separated list of geo-file layer
name(s).
LOADING_MODE
: Scheme for distributing the
extraction and loading of data from the source
data file(s). This option applies only when
loading files that are local to the database.
Supported values:
HEAD
: The head node loads all data. All
files must be available to the head
node.
DISTRIBUTED_SHARED
: The head node
coordinates loading data by worker
processes across all nodes from shared
files available to all workers. NOTE:
Instead of existing on a shared source,
the files can be duplicated on a source
local to each host to improve
performance, though the files must
appear as the same data set from the
perspective of all hosts performing the
load.
DISTRIBUTED_LOCAL
: A single worker
process on each node loads all files
that are available to it. This option
works best when each worker loads files
from its own file system, to maximize
performance. In order to avoid data
duplication, either each worker
performing the load needs to have
visibility to a set of files unique to
it (no file is visible to more than one
node) or the target table needs to have
a primary key (which will allow the
worker to automatically deduplicate
data). NOTE: If the target table
doesn't exist, the table structure will
be determined by the head node. If the
head node has no files local to it, it
will be unable to determine the
structure and the request will fail. If
the head node is configured to have no
worker processes, no data strictly
accessible to the head node will be
loaded.
HEAD
.
LOCAL_TIME_OFFSET
: Apply an offset to Avro
local timestamp columns.
MAX_RECORDS_TO_LOAD
: Limit the number of
records to load in this request: if this number
is larger than BATCH_SIZE
, then the number of records loaded
will be limited to the next whole number of
BATCH_SIZE
(per working thread).
NUM_TASKS_PER_RANK
: Number of tasks per rank for reading
files. Defaults to the system configuration
parameter external_file_reader_num_tasks.
POLL_INTERVAL
: Number of seconds between attempts to
load external files into the table. If zero,
polling will be continuous as long as data is
found. If no data is found, the interval will
steadily increase to a maximum of 60 seconds.
The default value is '0'.
PRIMARY_KEYS
: Comma-separated list of column
names to set as primary keys, when not specified
in the type.
REFRESH_METHOD
: Method by which the table can
be refreshed from its source data.
Supported values:
MANUAL
: Refresh only occurs when
manually requested by invoking the
refresh action of alterTable
on this table.
ON_START
: Refresh table on database
startup and when manually requested by
invoking the refresh action of alterTable
on this table.
MANUAL
.
SCHEMA_REGISTRY_SCHEMA_NAME
: Name of the Avro
schema in the schema registry to use when
reading Avro records.
SHARD_KEYS
: Comma-separated list of column
names to set as shard keys, when not specified
in the type.
SKIP_LINES
: Number of lines to skip from the beginning
of the file.
START_OFFSETS
: Starting offsets by partition to
fetch from Kafka, as a comma-separated list of
partition:offset pairs.
SUBSCRIBE
: Continuously poll the data source to
check for new data and load it into the table.
Supported values:
The default value is FALSE
.
TABLE_INSERT_MODE
: Insertion scheme to use when
inserting records from multiple shapefiles.
Supported values:
SINGLE
: Insert all records into a
single table.
TABLE_PER_FILE
: Insert records from
each file into a new table corresponding
to that file.
SINGLE
.
TEXT_COMMENT_STRING
: Specifies the character
string that should be interpreted as a comment
line prefix in the source data. All lines in
the data starting with the provided string are
ignored. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '#'.
TEXT_DELIMITER
: Specifies the character
delimiting field values in the source data and
field names in the header (if present). For
DELIMITED_TEXT
FILE_TYPE
only. The default value is ','.
TEXT_ESCAPE_CHARACTER
: Specifies the character
that is used to escape other characters in the
source data. An 'a', 'b', 'f', 'n', 'r', 't',
or 'v' preceded by an escape character will be
interpreted as the ASCII bell, backspace, form
feed, line feed, carriage return, horizontal
tab, & vertical tab, respectively. For example,
the escape character followed by an 'n' will be
interpreted as a newline within a field value.
The escape character can also be used to escape
the quoting character, and will be treated as an
escape character whether it is within a quoted
field value or not. For DELIMITED_TEXT
FILE_TYPE
only.
TEXT_HAS_HEADER
: Indicates whether the source
data contains a header row. For DELIMITED_TEXT
FILE_TYPE
only.
Supported values:
The default value is TRUE
.
TEXT_HEADER_PROPERTY_DELIMITER
: Specifies the
delimiter for column properties in the
header row (if present). Cannot be set to the same
value as TEXT_DELIMITER
. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '|'.
TEXT_NULL_STRING
: Specifies the character
string that should be interpreted as a null
value in the source data. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '\N'.
TEXT_QUOTE_CHARACTER
: Specifies the character
that should be interpreted as a field value
quoting character in the source data. The
character must appear at beginning and end of
field value to take effect. Delimiters within
quoted fields are treated as literals and not
delimiters. Within a quoted field, two
consecutive quote characters will be interpreted
as a single literal quote character, effectively
escaping it. To not have a quote character,
specify an empty string. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '"'.
TEXT_SEARCH_COLUMNS
: Add 'text_search' property
to internally inferred string columns. A
comma-separated list of column names, or '*' for
all columns. To add the 'text_search' property
only to string columns of at least a minimum
size, also set the TEXT_SEARCH_MIN_COLUMN_LENGTH option.
TEXT_SEARCH_MIN_COLUMN_LENGTH
: Set the minimum
column size for strings to apply the
'text_search' property to. Used only when TEXT_SEARCH_COLUMNS
has a value.
TRUNCATE_STRINGS
: If set to TRUE
, truncate string values that are longer
than the column's type size.
Supported values:
The default value is FALSE
.
TRUNCATE_TABLE
: If set to TRUE
, truncates the table specified by tableName
prior to loading the file(s).
Supported values:
The default value is FALSE
.
TYPE_INFERENCE_MODE
: Optimize type inferencing
for either speed or accuracy.
Supported values:
ACCURACY
: Scans data to get
exactly-typed & sized columns for all
data scanned.
SPEED
: Scans data and picks the widest
possible column types so that 'all'
values will fit with minimum data
scanned
SPEED
.
REMOTE_QUERY
: Remote SQL query from which data
will be sourced
REMOTE_QUERY_FILTER_COLUMN
: Name of column to
be used for splitting REMOTE_QUERY
into multiple sub-queries using
the data distribution of the given column
REMOTE_QUERY_INCREASING_COLUMN
: Column on
subscribed remote query result that will
increase for new records (e.g., TIMESTAMP).
REMOTE_QUERY_PARTITION_COLUMN
: Alias name for
REMOTE_QUERY_FILTER_COLUMN
.
UPDATE_ON_EXISTING_PK
: Specifies the record
collision policy for inserting into a table with
a primary key. If set to TRUE
, any existing table record with primary
key values that match those of a record being
inserted will be replaced by that new record
(the new data will be 'upserted'). If set to
FALSE
, any existing table record with primary
key values that match those of a record being
inserted will remain unchanged, while the new
record will be rejected and the error handled as
determined by IGNORE_EXISTING_PK
& ERROR_HANDLING
. If the specified table does
not have a primary key, then this option has no
effect.
Supported values:
TRUE
: Upsert new records when primary
keys match existing records
FALSE
: Reject new records when primary
keys match existing records
FALSE
.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateTableMonitorResponse createTableMonitor(CreateTableMonitorRequest request) throws GPUdbException
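The createTableExternal(String, List, Map, Map, Map) overload above takes two option maps: createTableOptions (structure) and options (loading behavior). A minimal sketch for a delimited-text load; the client call is commented out since it needs a live server, and the table, file path, and bad-record table names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class ExternalTableSketch {
    // Structure options, drawn from the createTable option set.
    static Map<String, String> createTableOptions() {
        Map<String, String> cto = new HashMap<>();
        cto.put("no_error_if_exists", "true");
        return cto;
    }

    // Load-time options for the external table itself.
    static Map<String, String> loadOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("file_type", "delimited_text");
        options.put("error_handling", "ignore_bad_records");   // skip malformed rows
        options.put("bad_record_table_name", "ingest_errors"); // illustrative name
        options.put("external_table_type", "materialized");    // keep a local copy
        return options;
    }

    public static void main(String[] args) {
        // Hypothetical call; the KiFS path and URL are placeholders:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // gpudb.createTableExternal("sales.orders_ext",
        //         Arrays.asList("kifs://data/orders.csv"),
        //         new HashMap<>(), createTableOptions(), loadOptions());
        System.out.println(loadOptions());
    }
}
```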
tableName
) and forwards event notifications to subscribers via ZMQ.
After this call completes, subscribe to the returned topicId
on
the ZMQ table monitor port (default 9002). Each time an operation of the
given type on the table completes, a multipart message is published for
that topic; the first part contains only the topic ID, and each
subsequent part contains one binary-encoded Avro object that corresponds
to the event and can be decoded using typeSchema
. The monitor will continue to run (regardless of whether or
not there are any subscribers) until deactivated with clearTableMonitor
.
For more information on table monitors, see Table Monitors.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateTableMonitorResponse createTableMonitor(String tableName, Map<String,String> options) throws GPUdbException
tableName
) and forwards event notifications to subscribers via
ZMQ. After this call completes, subscribe to the returned topicId
on
the ZMQ table monitor port (default 9002). Each time an operation of the
given type on the table completes, a multipart message is published for
that topic; the first part contains only the topic ID, and each
subsequent part contains one binary-encoded Avro object that corresponds
to the event and can be decoded using typeSchema
. The monitor will continue to run (regardless of whether or
not there are any subscribers) until deactivated with clearTableMonitor
.
For more information on table monitors, see Table Monitors.
tableName
- Name of the table to monitor, in
[schema_name.]table_name format, using standard name resolution rules.options
- Optional parameters.
EVENT
: Type of modification event on the target
table to be monitored by this table monitor.
Supported values:
INSERT
: Get notifications of new record
insertions. The new row images are
forwarded to the subscribers.
UPDATE
: Get notifications of update
operations. The modified row count
information is forwarded to the
subscribers.
DELETE
: Get notifications of delete
operations. The deleted row count
information is forwarded to the
subscribers.
INSERT
.
MONITOR_ID
: ID to use for this monitor instead
of a randomly generated one
DATASINK_NAME
: Name of an existing data sink to send change data
notifications to
DESTINATION
: Destination for the output data in
format 'destination_type://path[:port]'.
Supported destination types are 'http', 'https'
and 'kafka'.
KAFKA_TOPIC_NAME
: Name of the Kafka topic to
publish to if DESTINATION
in options
is specified and
is a Kafka broker
INCREASING_COLUMN
: Column on subscribed table
that will increase for new records (e.g.,
TIMESTAMP).
EXPRESSION
: Filter expression to limit records
for notification
REFRESH_METHOD
: Method controlling when the
table monitor reports changes to the tableName
.
Supported values:
ON_CHANGE
: Report changes as they
occur.
PERIODIC
: Report changes periodically
at rate specified by REFRESH_PERIOD
.
ON_CHANGE
.
REFRESH_PERIOD
: When REFRESH_METHOD
is PERIODIC
, specifies the period in seconds at
which changes are reported.
REFRESH_START_TIME
: When REFRESH_METHOD
is PERIODIC
, specifies the first time at which
changes are reported. Value is a datetime
string with format 'YYYY-MM-DD HH:MM:SS'.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateTriggerByAreaResponse createTriggerByArea(CreateTriggerByAreaRequest request) throws GPUdbException
Sets up an area trigger mechanism for two column_names for one or more tables. (This function is essentially the two-dimensional version of createTriggerByRange.) Once the trigger has been activated, any record
added to the listed table(s) via insertRecords
with the chosen
columns' values falling within the specified region will trip the
trigger. All such records will be queued at the trigger port (by default
'9001' but able to be retrieved via showSystemStatus
) for
any listening client to collect. Active triggers can be cancelled by
using the clearTrigger
endpoint or by clearing all relevant tables.
The output returns the trigger handle as well as indicating success or failure of the trigger activation.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateTriggerByAreaResponse createTriggerByArea(String requestId, List<String> tableNames, String xColumnName, List<Double> xVector, String yColumnName, List<Double> yVector, Map<String,String> options) throws GPUdbException
Sets up an area trigger mechanism for two column_names for one or more tables. (This function is essentially the two-dimensional version of createTriggerByRange.) Once the trigger has been activated, any
record added to the listed table(s) via insertRecords
with the chosen
columns' values falling within the specified region will trip the
trigger. All such records will be queued at the trigger port (by default
'9001' but able to be retrieved via showSystemStatus
) for any listening client to collect. Active triggers
can be cancelled by using the clearTrigger
endpoint or by clearing all relevant tables.
The output returns the trigger handle as well as indicating success or failure of the trigger activation.
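A sketch of activating an area trigger over a triangular region (the server URL, schema, and table name are hypothetical):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.CreateTriggerByAreaResponse;

public class AreaTriggerExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");  // hypothetical URL

        // Vertices of the region; xVector and yVector pair up element by
        // element, so they must be the same length.
        List<Double> xVector = Arrays.asList(-110.0, -90.0, -100.0);
        List<Double> yVector = Arrays.asList(30.0, 30.0, 40.0);

        CreateTriggerByAreaResponse response = gpudb.createTriggerByArea(
                "my_area_trigger",                  // user-created request ID
                Arrays.asList("my_schema.points"),  // hypothetical table
                "x", xVector,
                "y", yVector,
                new HashMap<String, String>());

        // Matching inserts are queued on the trigger port (default 9001)
        System.out.println("Trigger ID: " + response.getTriggerId());
    }
}
```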
requestId
- User-created ID for the trigger. The ID can be
alphanumeric, contain symbols, and must contain at
least one character.tableNames
- Names of the tables on which the trigger will be
activated and maintained, each in
[schema_name.]table_name format, using standard name resolution rules.xColumnName
- Name of a numeric column on which the trigger is
activated. Usually 'x' for geospatial data points.xVector
- The respective coordinate values for the region on which
the trigger is activated. This usually translates to the
x-coordinates of a geospatial region.yColumnName
- Name of a second numeric column on which the trigger
is activated. Usually 'y' for geospatial data
points.yVector
- The respective coordinate values for the region on which
the trigger is activated. This usually translates to the
y-coordinates of a geospatial region. Must be the same
length as xVector.options
- Optional parameters. The default value is an empty
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateTriggerByRangeResponse createTriggerByRange(CreateTriggerByRangeRequest request) throws GPUdbException
Sets up a simple range trigger for a column_name for one or more tables. Once the trigger has been activated, any record added to the listed table(s) via insertRecords
with the chosen column_name's value falling within the
specified range will trip the trigger. All such records will be queued
at the trigger port (by default '9001' but able to be retrieved via
showSystemStatus
) for any listening client to collect. Active triggers
can be cancelled by using the clearTrigger
endpoint or by
clearing all relevant tables.
The output returns the trigger handle as well as indicating success or failure of the trigger activation.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateTriggerByRangeResponse createTriggerByRange(String requestId, List<String> tableNames, String columnName, double min, double max, Map<String,String> options) throws GPUdbException
Sets up a simple range trigger for a column_name for one or more tables. Once the trigger has been activated, any record added to the listed table(s) via insertRecords
with the chosen column_name's value falling within the
specified range will trip the trigger. All such records will be queued
at the trigger port (by default '9001' but able to be retrieved via
showSystemStatus
) for any listening
client to collect. Active triggers can be cancelled by using the clearTrigger
endpoint or by clearing
all relevant tables.
The output returns the trigger handle as well as indicating success or failure of the trigger activation.
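A sketch of activating a range trigger (the server URL, schema, and table name are hypothetical):

```java
import java.util.Arrays;
import java.util.HashMap;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.CreateTriggerByRangeResponse;

public class RangeTriggerExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");  // hypothetical URL

        // Fire for any inserted record whose 'x' value falls in [0, 100]
        CreateTriggerByRangeResponse response = gpudb.createTriggerByRange(
                "my_range_trigger",                 // user-created request ID
                Arrays.asList("my_schema.points"),  // hypothetical table
                "x",
                0.0,    // min (inclusive)
                100.0,  // max (inclusive)
                new HashMap<String, String>());

        System.out.println("Trigger ID: " + response.getTriggerId());
    }
}
```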
requestId
- User-created ID for the trigger. The ID can be
alphanumeric, contain symbols, and must contain at
least one character.tableNames
- Tables on which the trigger will be active, each in
[schema_name.]table_name format, using standard name resolution rules.columnName
- Name of a numeric column_name on which the trigger is
activated.min
- The lower bound (inclusive) for the trigger range.max
- The upper bound (inclusive) for the trigger range.options
- Optional parameters. The default value is an empty
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateTypeResponse createType(CreateTypeRequest request) throws GPUdbException
Creates a new type describing the layout of a table. The type definition is a JSON string describing the fields (i.e., columns) of the type. Each field consists of a name and a data type. Supported data types are: double, float, int, long, string, and bytes. In addition, one or more properties can be specified for each column, which customize the memory usage and query availability of that column. Note that some properties are mutually exclusive; i.e., they cannot be specified for any given column simultaneously. One example of mutually exclusive properties is
DATA
and
STORE_ONLY
.
A single primary key and/or single shard
key can be set across one or more columns. If a primary key is
specified, then a uniqueness constraint is enforced, in that only a
single object can exist with a given primary key column value (or set of
values for the key columns, if using a composite primary key). When
inserting
data into a
table with a primary key, depending on the parameters in the request,
incoming objects with primary key values that match existing objects
will either overwrite (i.e. update) the existing object or will be
skipped and not added into the set.
Example of a type definition with some of the parameters:
{"type":"record", "name":"point", "fields":[{"name":"msg_id","type":"string"}, {"name":"x","type":"double"}, {"name":"y","type":"double"}, {"name":"TIMESTAMP","type":"double"}, {"name":"source","type":"string"}, {"name":"group_id","type":"string"}, {"name":"OBJECT_ID","type":"string"}] }Properties:
{"group_id":["store_only"], "msg_id":["store_only","text_search"] }
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateTypeResponse createType(String typeDefinition, String label, Map<String,List<String>> properties, Map<String,String> options) throws GPUdbException
Creates a new type describing the layout of a table. The type definition is a JSON string describing the fields (i.e., columns) of the type. Each field consists of a name and a data type. Supported data types are: double, float, int, long, string, and bytes. In addition, one or more properties can be specified for each column, which customize the memory usage and query availability of that column. Note that some properties are mutually exclusive; i.e., they cannot be specified for any given column simultaneously. One example of mutually exclusive properties is
DATA
and
STORE_ONLY
.
A single primary key and/or single shard
key can be set across one or more columns. If a primary key is
specified, then a uniqueness constraint is enforced, in that only a
single object can exist with a given primary key column value (or set of
values for the key columns, if using a composite primary key). When
inserting
data into a
table with a primary key, depending on the parameters in the request,
incoming objects with primary key values that match existing objects
will either overwrite (i.e. update) the existing object or will be
skipped and not added into the set.
Example of a type definition with some of the parameters:
{"type":"record", "name":"point", "fields":[{"name":"msg_id","type":"string"}, {"name":"x","type":"double"}, {"name":"y","type":"double"}, {"name":"TIMESTAMP","type":"double"}, {"name":"source","type":"string"}, {"name":"group_id","type":"string"}, {"name":"OBJECT_ID","type":"string"}] }Properties:
{"group_id":["store_only"], "msg_id":["store_only","text_search"] }
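The definition and properties above can be registered with the String-argument overload. A sketch (the server URL and type label are hypothetical, and the type definition is trimmed to three fields):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.CreateTypeResponse;

public class CreateTypeExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");  // hypothetical URL

        // A trimmed version of the Avro record definition shown above
        String typeDefinition = "{\"type\":\"record\",\"name\":\"point\",\"fields\":["
                + "{\"name\":\"msg_id\",\"type\":\"string\"},"
                + "{\"name\":\"x\",\"type\":\"double\"},"
                + "{\"name\":\"y\",\"type\":\"double\"}]}";

        // Column properties keyed by column name, as in the example above
        Map<String, List<String>> properties = new HashMap<>();
        properties.put("msg_id", Arrays.asList("store_only", "text_search"));

        CreateTypeResponse response = gpudb.createType(
                typeDefinition, "point_type", properties,
                new HashMap<String, String>());

        // The returned type ID can then be used when creating a table
        System.out.println("Type ID: " + response.getTypeId());
    }
}
```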
typeDefinition
- a JSON string describing the columns of the type
to be registered.label
- A user-defined description string which can be used to
differentiate between tables and types with otherwise
identical schemas.properties
- Each key-value pair specifies the properties to use
for a given column where the key is the column name.
All keys used must be relevant column names for the
given table. Specifying any property overrides the
default properties for that column (which is based on
the column's data type).
Valid values are:
DATA
: Default property for all numeric and
string type columns; makes the column
available for GPU queries.
TEXT_SEARCH
: Valid only for select 'string'
columns. Enables full text search--see Full Text Search for
details and applicable string column types.
Can be set independently of DATA
and STORE_ONLY
.
STORE_ONLY
: Persist the column value but do
not make it available to queries (e.g. filter
); i.e., it is mutually exclusive with the
DATA
property. Any 'bytes' type column must
have a STORE_ONLY
property. This property reduces
system memory usage.
DISK_OPTIMIZED
: Works in conjunction with
the DATA
property for string columns. This
property reduces system disk usage by
disabling reverse string lookups. Queries
like filter
, filterByList
, and filterByValue
work as usual but aggregateUnique
and aggregateGroupBy
are not allowed
on columns with this property.
TIMESTAMP
: Valid only for 'long' columns.
Indicates that this field represents a
timestamp and will be provided in
milliseconds since the Unix epoch: 00:00:00
Jan 1 1970. Dates represented by a timestamp
must fall between the year 1000 and the year
2900.
ULONG
: Valid only for 'string' columns. It
represents an unsigned long integer data
type. The string can only be interpreted as
an unsigned long data type with minimum value
of zero, and maximum value of
18446744073709551615.
UUID
: Valid only for 'string' columns. It
represents a UUID data type. Internally, it
is stored as a 128-bit integer.
DECIMAL
: Valid only for 'string' columns.
It represents a SQL type NUMERIC(19, 4) data
type. There can be up to 15 digits before
the decimal point and up to four digits in
the fractional part. The value can be
positive or negative (indicated by a minus
sign at the beginning). This property is
mutually exclusive with the TEXT_SEARCH
property.
DATE
: Valid only for 'string' columns.
Indicates that this field represents a date
and will be provided in the format
'YYYY-MM-DD'. The allowable range is
1000-01-01 through 2900-01-01. This property
is mutually exclusive with the TEXT_SEARCH
property.
TIME
: Valid only for 'string' columns.
Indicates that this field represents a
time-of-day and will be provided in the
format 'HH:MM:SS.mmm'. The allowable range
is 00:00:00.000 through 23:59:59.999. This
property is mutually exclusive with the
TEXT_SEARCH
property.
DATETIME
: Valid only for 'string' columns.
Indicates that this field represents a
datetime and will be provided in the format
'YYYY-MM-DD HH:MM:SS.mmm'. The allowable
range is 1000-01-01 00:00:00.000 through
2900-01-01 23:59:59.999. This property is
mutually exclusive with the TEXT_SEARCH
property.
CHAR1
: This property provides optimized
memory, disk and query performance for string
columns. Strings with this property must be
no longer than 1 character.
CHAR2
: This property provides optimized
memory, disk and query performance for string
columns. Strings with this property must be
no longer than 2 characters.
CHAR4
: This property provides optimized
memory, disk and query performance for string
columns. Strings with this property must be
no longer than 4 characters.
CHAR8
: This property provides optimized
memory, disk and query performance for string
columns. Strings with this property must be
no longer than 8 characters.
CHAR16
: This property provides optimized
memory, disk and query performance for string
columns. Strings with this property must be
no longer than 16 characters.
CHAR32
: This property provides optimized
memory, disk and query performance for string
columns. Strings with this property must be
no longer than 32 characters.
CHAR64
: This property provides optimized
memory, disk and query performance for string
columns. Strings with this property must be
no longer than 64 characters.
CHAR128
: This property provides optimized
memory, disk and query performance for string
columns. Strings with this property must be
no longer than 128 characters.
CHAR256
: This property provides optimized
memory, disk and query performance for string
columns. Strings with this property must be
no longer than 256 characters.
BOOLEAN
: This property provides optimized
memory and query performance for int columns.
Ints with this property must be between 0 and
1 (inclusive)
INT8
: This property provides optimized
memory and query performance for int columns.
Ints with this property must be between -128
and +127 (inclusive)
INT16
: This property provides optimized
memory and query performance for int columns.
Ints with this property must be between
-32768 and +32767 (inclusive)
IPV4
: This property provides optimized
memory, disk and query performance for string
columns representing IPv4 addresses (e.g.,
192.168.1.1). Strings with this property must
be of the form: A.B.C.D where A, B, C and D
are in the range of 0-255.
ARRAY
: Valid only for 'string' columns.
Indicates that this field contains an array.
The value type and (optionally) the item
count should be specified in parentheses;
e.g., 'array(int, 10)' for a 10-integer
array. Both 'array(int)' and 'array(int,
-1)' will designate an unlimited-length
integer array, though no bounds checking is
performed on arrays of any length.
JSON
: Valid only for 'string' columns.
Indicates that this field contains values in
JSON format.
VECTOR
: Valid only for 'bytes' columns.
Indicates that this field contains a vector
of floats. The length should be specified in
parentheses, e.g., 'vector(1000)'.
WKT
: Valid only for 'string' and 'bytes'
columns. Indicates that this field contains
geospatial geometry objects in Well-Known
Text (WKT) or Well-Known Binary (WKB) format.
PRIMARY_KEY
: This property indicates that
this column will be part of (or the entire)
primary key.
SOFT_PRIMARY_KEY
: This property indicates
that this column will be part of (or the
entire) soft primary key.
SHARD_KEY
: This property indicates that this
column will be part of (or the entire) shard key.
NULLABLE
: This property indicates that this
column is nullable. However, setting this
property is insufficient for making the
column nullable. The user must declare the
type of the column as a union between its
regular type and 'null' in the avro schema
for the record type in typeDefinition
. For example, if a column is
of type integer and is nullable, then the
entry for the column in the avro schema must
be: ['int', 'null']. The C++, C#, Java, and
Python APIs have built-in convenience for
bypassing setting the avro schema by hand.
For those languages, one can use this
property as usual and not have to worry about
the avro schema for the record.
DICT
: This property indicates that this
column should be dictionary encoded. It can
only be used in conjunction with restricted
string (charN), int, long or date columns.
Dictionary encoding is best for columns where
the cardinality (the number of unique values)
is expected to be low. This property can save
a large amount of memory.
INIT_WITH_NOW
: For 'date', 'time',
'datetime', or 'timestamp' column types,
replace empty strings and invalid timestamps
with 'NOW()' upon insert.
INIT_WITH_UUID
: For 'uuid' type, replace
empty strings and invalid UUID values with
randomly-generated UUIDs upon insert.
UPDATE_WITH_NOW
: For 'date', 'time',
'datetime', or 'timestamp' column types,
always update the field with 'NOW()' upon any
update.
Map
.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateUnionResponse createUnion(CreateUnionRequest request) throws GPUdbException
Merges data from one or more tables with comparable data types into a new table. The following merges are supported:
UNION (DISTINCT/ALL) - For data set union details and examples, see Union. For limitations, see Union Limitations and Cautions.
INTERSECT (DISTINCT/ALL) - For data set intersection details and examples, see Intersect. For limitations, see Intersect Limitations.
EXCEPT (DISTINCT/ALL) - For data set subtraction details and examples, see Except. For limitations, see Except Limitations.
MERGE VIEWS - For a given set of filtered views on a single table, creates a single filtered view containing all of the unique records across all of the given filtered data sets.
Non-charN 'string' and 'bytes' column types cannot be merged, nor can columns marked as store-only.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateUnionResponse createUnion(String tableName, List<String> tableNames, List<List<String>> inputColumnNames, List<String> outputColumnNames, Map<String,String> options) throws GPUdbException
Merges data from one or more tables with comparable data types into a new table. The following merges are supported:
UNION (DISTINCT/ALL) - For data set union details and examples, see Union. For limitations, see Union Limitations and Cautions.
INTERSECT (DISTINCT/ALL) - For data set intersection details and examples, see Intersect. For limitations, see Intersect Limitations.
EXCEPT (DISTINCT/ALL) - For data set subtraction details and examples, see Except. For limitations, see Except Limitations.
MERGE VIEWS - For a given set of filtered views on a single table, creates a single filtered view containing all of the unique records across all of the given filtered data sets.
Non-charN 'string' and 'bytes' column types cannot be merged, nor can columns marked as store-only.
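A sketch of a UNION DISTINCT merge of two tables (the server URL, table names, and column names are hypothetical; the lowercase option key corresponds to the MODE constant described below):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.CreateUnionResponse;

public class CreateUnionExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");  // hypothetical URL

        Map<String, String> options = new HashMap<>();
        options.put("mode", "union_distinct");  // the MODE option; keep unique rows

        CreateUnionResponse response = gpudb.createUnion(
                "my_schema.combined",                           // output table
                Arrays.asList("my_schema.t1", "my_schema.t2"),  // input tables
                Arrays.asList(                                  // columns per input table
                        Arrays.asList("id", "value"),
                        Arrays.asList("id", "value")),
                Arrays.asList("id", "value"),                   // output column names
                options);

        System.out.println("Created: " + response.getTableName());
    }
}
```

Each inner list of input column names pairs positionally with the corresponding table in the table-name list, and each must match the length of the output column-name list.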
tableName
- Name of the table to be created, in
[schema_name.]table_name format, using standard name resolution rules and meeting table naming criteria.tableNames
- The list of table names to merge, in
[schema_name.]table_name format, using standard name resolution rules. Must
contain the names of one or more existing tables.inputColumnNames
- The list of columns from each of the
corresponding input tables.outputColumnNames
- The list of names of the columns to be stored
in the output table.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of tableName
. If PERSIST
is FALSE
(or unspecified), then this is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_TABLE_NAME
.
Supported values:
The default value is FALSE
.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the projection as part
of tableName
and use createSchema
to
create the schema if non-existent] Name of the
schema for the output table. If the schema
provided is non-existent, it will be
automatically created. The default value is ''.
MODE
: If MERGE_VIEWS
, then this operation will merge the
provided views. All tableNames
must be
views from the same underlying base table.
Supported values:
UNION_ALL
: Retains all rows from the
specified tables.
UNION
: Retains all unique rows from the
specified tables (synonym for UNION_DISTINCT
).
UNION_DISTINCT
: Retains all unique rows
from the specified tables.
EXCEPT
: Retains all unique rows from
the first table that do not appear in
the second table (only works on 2
tables).
EXCEPT_ALL
: Retains all rows (including
duplicates) from the first table that do
not appear in the second table (only
works on 2 tables).
INTERSECT
: Retains all unique rows that
appear in both of the specified tables
(only works on 2 tables).
INTERSECT_ALL
: Retains all
rows (including duplicates) that appear
in both of the specified tables (only
works on 2 tables).
MERGE_VIEWS
: Merge two or more views
(or views of views) of the same base
data set into a new view. If this mode
is selected inputColumnNames
AND
outputColumnNames
must be empty.
The resulting view would match the
results of a SQL OR operation, e.g., if
filter 1 creates a view using the
expression 'x = 20' and filter 2 creates
a view using the expression 'x <=
10', then the merge views operation
creates a new view using the expression
'x = 20 OR x <= 10'.
UNION_ALL
.
CHUNK_SIZE
: Indicates the number of records per
chunk to be used for this output table.
CHUNK_COLUMN_MAX_MEMORY
: Indicates the target
maximum data size for each column in a chunk to
be used for this output table.
CHUNK_MAX_MEMORY
: Indicates the target maximum
data size for all columns in a chunk to be used
for this output table.
CREATE_INDEXES
: Comma-separated list of columns
on which to create indexes on the output table.
The columns specified must be present in outputColumnNames
.
TTL
: Sets the TTL of the output table
specified in tableName
.
PERSIST
: If TRUE
, then the output table specified in tableName
will be persisted and will not expire
unless a TTL
is specified. If FALSE
, then the output table will be an
in-memory table and will expire unless a TTL
is specified.
Supported values:
The default value is FALSE
.
VIEW_ID
: ID of view of which this output table
is a member. The default value is ''.
FORCE_REPLICATED
: If TRUE
, then the output table specified in tableName
will be replicated even if the source
tables are not.
Supported values:
The default value is FALSE
.
STRATEGY_DEFINITION
: The tier strategy for the table
and its columns.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateUserExternalResponse createUserExternal(CreateUserExternalRequest request) throws GPUdbException
Creates a new external user (a user whose credentials are managed by an external LDAP).
Note: This method should be used for on-premise deployments only.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateUserExternalResponse createUserExternal(String name, Map<String,String> options) throws GPUdbException
Creates a new external user (a user whose credentials are managed by an external LDAP).
Note: This method should be used for on-premise deployments only.
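A sketch of creating an external user (the server URL, user name, and resource group are hypothetical; the lowercase option keys correspond to the constants described below):

```java
import java.util.HashMap;
import java.util.Map;

import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.CreateUserExternalResponse;

public class CreateExternalUserExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191");  // hypothetical URL

        Map<String, String> options = new HashMap<>();
        options.put("create_home_directory", "true");  // CREATE_HOME_DIRECTORY
        options.put("resource_group", "analysts_rg");  // hypothetical resource group

        // The name must match the user's LDAP name, prefixed with '@'
        CreateUserExternalResponse response =
                gpudb.createUserExternal("@jdoe", options);

        System.out.println("Created user: " + response.getName());
    }
}
```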
name
- Name of the user to be created. Must exactly match the
user's name in the external LDAP, prefixed with '@'. Must
not be the same name as an existing user.options
- Optional parameters.
ACTIVATED
: Is the user allowed to login.
Supported values:
The default value is TRUE
.
CREATE_HOME_DIRECTORY
: When TRUE
, a home directory in KiFS is created for
this user.
Supported values:
The default value is TRUE
.
DEFAULT_SCHEMA
: Default schema to associate
with this user
DIRECTORY_DATA_LIMIT
: The maximum capacity to
apply to the created directory if CREATE_HOME_DIRECTORY
is TRUE
. Set to -1 to indicate no upper limit. If
empty, the system default limit is applied.
RESOURCE_GROUP
: Name of an existing resource
group to associate with this user
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateUserInternalResponse createUserInternal(CreateUserInternalRequest request) throws GPUdbException
Creates a new internal user (a user whose credentials are managed by the database system).
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateUserInternalResponse createUserInternal(String name, String password, Map<String,String> options) throws GPUdbException
Creates a new internal user (a user whose credentials are managed by the database system).
name
- Name of the user to be created. Must contain only lowercase
letters, digits, and underscores, and cannot begin with a
digit. Must not be the same name as an existing user or
role.password
- Initial password of the user to be created. May be an
empty string for no password.options
- Optional parameters.
ACTIVATED
: Is the user allowed to login.
Supported values:
The default value is TRUE
.
CREATE_HOME_DIRECTORY
: When TRUE
, a home directory in KiFS is created for
this user.
Supported values:
The default value is TRUE
.
DEFAULT_SCHEMA
: Default schema to associate
with this user
DIRECTORY_DATA_LIMIT
: The maximum capacity to
apply to the created directory if CREATE_HOME_DIRECTORY
is TRUE
. Set to -1 to indicate no upper limit. If
empty, the system default limit is applied.
RESOURCE_GROUP
: Name of an existing resource
group to associate with this user
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public CreateVideoResponse createVideo(CreateVideoRequest request) throws GPUdbException
Creates a job to generate a sequence of raster images that visualize data over a specified time.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public CreateVideoResponse createVideo(String attribute, String begin, double durationSeconds, String end, double framesPerSecond, String style, String path, String styleParameters, Map<String,String> options) throws GPUdbException
Creates a job to generate a sequence of raster images that visualize data over a specified time.
attribute
- The animated attribute to map to the video's frames.
Must be present in the LAYERS specified for the
visualization. This is often a time-related field but
may be any numeric type.begin
- The start point for the video. Accepts an expression
evaluable over the attribute
.durationSeconds
- Seconds of video to produceend
- The end point for the video. Accepts an expression evaluable
over the attribute
.framesPerSecond
- The presentation frame rate of the encoded video
in frames per second.style
- The name of the visualize mode; should correspond to the
schema used for the styleParameters
field.
Supported values:
path
- Fully-qualified KiFS path. Write access is required. A
file must not exist at that path, unless REPLACE_IF_EXISTS
is TRUE
.styleParameters
- A string containing the JSON-encoded visualize
request. Must correspond to the visualize mode
specified in the style
field.options
- Optional parameters.
TTL
: Sets the TTL of the video.
WINDOW
: Specified using the data-type
corresponding to the attribute
. For a
window of size W, a video frame rendered for
time t will visualize data in the interval
[t-W,t]. The minimum window size is the interval
between successive frames. The minimum value is
the default. If a value less than the minimum
value is specified, it is replaced with the
minimum window size. Larger values will make
changes throughout the video appear more smooth
while smaller values will capture fast
variations in the data.
NO_ERROR_IF_EXISTS
: If TRUE
, does not return an error if the video
already exists. Ignored if REPLACE_IF_EXISTS
is TRUE
.
Supported values:
The default value is FALSE
.
REPLACE_IF_EXISTS
: If TRUE
, deletes any existing video with the same
path before creating a new video.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteDirectoryResponse deleteDirectory(DeleteDirectoryRequest request) throws GPUdbException
Deletes a directory from KiFS.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteDirectoryResponse deleteDirectory(String directoryName, Map<String,String> options) throws GPUdbException
Deletes a directory from KiFS.
directoryName
- Name of the directory in KiFS to be deleted. The
directory must contain no files, unless RECURSIVE
is TRUE.
options
- Optional parameters.
RECURSIVE
: If TRUE
, will delete directory and all files
residing in it. If FALSE, the directory must be
empty for deletion.
Supported values:
The default value is FALSE
.
NO_ERROR_IF_NOT_EXISTS
: If TRUE
, no error is returned if specified
directory does not exist.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteFilesResponse deleteFiles(DeleteFilesRequest request) throws GPUdbException
Deletes one or more files from KiFS.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteFilesResponse deleteFiles(List<String> fileNames, Map<String,String> options) throws GPUdbException
Deletes one or more files from KiFS.
fileNames
- An array of names of files to be deleted. File paths
may contain wildcard characters after the KiFS
directory delimiter. Accepted wildcard characters are
asterisk (*) to represent any string of zero or more
characters, and question mark (?) to indicate a single
character.options
- Optional parameters.
NO_ERROR_IF_NOT_EXISTS
: If TRUE
, no error is returned if a specified file
does not exist.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteGraphResponse deleteGraph(DeleteGraphRequest request) throws GPUdbException
Deletes an existing graph from the graph server and/or persist.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteGraphResponse deleteGraph(String graphName, Map<String,String> options) throws GPUdbException
Deletes an existing graph from the graph server and/or persist.
graphName
- Name of the graph to be deleted.options
- Optional parameters.
DELETE_PERSIST
: If set to TRUE
, the graph is removed from the server and
persist. If set to FALSE
, the graph is removed from the server but
is left in persist. The graph can be reloaded
from persist if it is recreated with the same
'graph_name'.
Supported values:
The default value is TRUE
.
SERVER_ID
: Indicates which graph server(s) to
send the request to. Default is to send to all
the graph servers.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteProcResponse deleteProc(DeleteProcRequest request) throws GPUdbException
Deletes a proc.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteProcResponse deleteProc(String procName, Map<String,String> options) throws GPUdbException
Deletes a proc.
procName
- Name of the proc to be deleted. Must be the name of a
currently existing proc.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteRecordsResponse deleteRecords(DeleteRecordsRequest request) throws GPUdbException
Deletes record(s) matching the provided criteria from the given table. The record selection criteria can either be one or more expressions
(matching multiple records), a single record identified by RECORD_ID
options, or all records when using DELETE_ALL_RECORDS
. Note that the three selection criteria are
mutually exclusive. This operation cannot be run on a view. The
operation is synchronous, meaning that a response will not be available
until the request is completely processed and all the matching records
are deleted.request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteRecordsResponse deleteRecords(String tableName, List<String> expressions, Map<String,String> options) throws GPUdbException
Deletes record(s) matching the provided criteria from the given table. The record selection criteria can either be one or more expressions
(matching multiple records), a single record identified by
RECORD_ID
options, or all records when using DELETE_ALL_RECORDS
. Note that the three selection criteria are
mutually exclusive. This operation cannot be run on a view. The
operation is synchronous, meaning that a response will not be available
until the request is completely processed and all the matching records
are deleted.tableName
- Name of the table from which to delete records, in
[schema_name.]table_name format, using standard name resolution rules. Must contain
the name of an existing table; not applicable to
views.expressions
- A list of the actual predicates, one for each
select; format should follow the guidelines provided
here. Specifying one or more
expressions
is mutually exclusive to
specifying RECORD_ID
in the options
.options
- Optional parameters.
GLOBAL_EXPRESSION
: An optional global
expression to reduce the search space of the
expressions
. The default value is ''.
RECORD_ID
: A record ID identifying a single
record, obtained at the time of insertion
of the record
or by calling getRecordsFromCollection
with the
return_record_ids option. This option cannot
be used to delete records from replicated tables.
DELETE_ALL_RECORDS
: If set to TRUE
, all records in the table will be deleted.
If set to FALSE
, then the option is effectively ignored.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteResourceGroupResponse deleteResourceGroup(DeleteResourceGroupRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteResourceGroupResponse deleteResourceGroup(String name, Map<String,String> options) throws GPUdbException
name
- Name of the resource group to be deleted.options
- Optional parameters.
CASCADE_DELETE
: If TRUE
, delete any existing entities owned by
this group. Otherwise, this request will return
an error if any such entities exist.
Supported values:
The default value is FALSE
.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteRoleResponse deleteRole(DeleteRoleRequest request) throws GPUdbException
Note: This method should be used for on-premise deployments only.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteRoleResponse deleteRole(String name, Map<String,String> options) throws GPUdbException
Note: This method should be used for on-premise deployments only.
name
- Name of the role to be deleted. Must be an existing role.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteUserResponse deleteUser(DeleteUserRequest request) throws GPUdbException
Note: This method should be used for on-premise deployments only.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DeleteUserResponse deleteUser(String name, Map<String,String> options) throws GPUdbException
Note: This method should be used for on-premise deployments only.
name
- Name of the user to be deleted. Must be an existing user.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DownloadFilesResponse downloadFiles(DownloadFilesRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DownloadFilesResponse downloadFiles(List<String> fileNames, List<Long> readOffsets, List<Long> readLengths, Map<String,String> options) throws GPUdbException
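A usage sketch for this downloadFiles overload, using empty offset/length lists to fetch whole files; the URL and KiFS paths are hypothetical, and the option constants follow the request-class Options pattern documented below:

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.DownloadFilesRequest;
import com.gpudb.protocol.DownloadFilesResponse;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class DownloadFilesExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191"); // hypothetical URL

        // Wildcards are allowed after the KiFS directory delimiter
        List<String> fileNames = Arrays.asList("data/sales_*.csv");
        Map<String, String> options = Collections.singletonMap(
                DownloadFilesRequest.Options.FILE_ENCODING,
                DownloadFilesRequest.Options.NONE);

        DownloadFilesResponse response = gpudb.downloadFiles(
                fileNames,
                Collections.<Long>emptyList(),  // readOffsets: empty => whole files
                Collections.<Long>emptyList(),  // readLengths: empty => whole files
                options);

        for (int i = 0; i < response.getFileNames().size(); i++) {
            System.out.println(response.getFileNames().get(i) + ": "
                    + response.getFileData().get(i).remaining() + " bytes");
        }
    }
}
```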
fileNames
- An array of the file names to download from KiFS. File
paths may contain wildcard characters after the KiFS
directory delimiter. Accepted wildcard characters are
asterisk (*) to represent any string of zero or more
characters, and question mark (?) to indicate a single
character.readOffsets
- An array of starting byte offsets from which to read
each respective file in fileNames
. Must
either be empty or the same length as fileNames
. If empty, files are downloaded in their
entirety. If not empty, readLengths
must
also not be empty.readLengths
- Array of number of bytes to read from each
respective file in fileNames
. Must either be
empty or the same length as fileNames
. If
empty, files are downloaded in their entirety. If
not empty, readOffsets
must also not be
empty.options
- Optional parameters.
FILE_ENCODING
: Encoding to be applied to the
output file data. When using JSON serialization
it is recommended to specify this as BASE64
.
Supported values:
BASE64
: Apply base64 encoding to the
output file data.
NONE
: Do not apply any encoding to the
output file data.
NONE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DropContainerRegistryResponse dropContainerRegistry(DropContainerRegistryRequest request) throws GPUdbException
GPUdbException
public DropContainerRegistryResponse dropContainerRegistry(String registryName, Map<String,String> options) throws GPUdbException
GPUdbException
public DropCredentialResponse dropCredential(DropCredentialRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DropCredentialResponse dropCredential(String credentialName, Map<String,String> options) throws GPUdbException
credentialName
- Name of the credential to be dropped. Must be an
existing credential.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DropDatasinkResponse dropDatasink(DropDatasinkRequest request) throws GPUdbException
By default, if any table monitors use this sink as a destination, the
request will be blocked unless option CLEAR_TABLE_MONITORS
is TRUE
.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DropDatasinkResponse dropDatasink(String name, Map<String,String> options) throws GPUdbException
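A sketch of this dropDatasink overload, using the CLEAR_TABLE_MONITORS option described below so the drop is not blocked by active table monitors; the URL and sink name are hypothetical:

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.DropDatasinkRequest;

import java.util.Collections;
import java.util.Map;

public class DropDatasinkExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191"); // hypothetical URL

        // Clear any table monitors still using the sink so the drop succeeds
        Map<String, String> options = Collections.singletonMap(
                DropDatasinkRequest.Options.CLEAR_TABLE_MONITORS,
                DropDatasinkRequest.Options.TRUE);

        gpudb.dropDatasink("kafka_sink", options); // hypothetical sink name
    }
}
```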
By default, if any table monitors use this sink as a destination, the
request will be blocked unless option CLEAR_TABLE_MONITORS
is TRUE
.
name
- Name of the data sink to be dropped. Must be an existing
data sink.options
- Optional parameters.
CLEAR_TABLE_MONITORS
: If TRUE
, any table monitors that use this
data sink will be cleared.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DropDatasourceResponse dropDatasource(DropDatasourceRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DropDatasourceResponse dropDatasource(String name, Map<String,String> options) throws GPUdbException
name
- Name of the data source to be dropped. Must be an existing
data source.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DropEnvironmentResponse dropEnvironment(DropEnvironmentRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DropEnvironmentResponse dropEnvironment(String environmentName, Map<String,String> options) throws GPUdbException
environmentName
- Name of the environment to be dropped. Must be
an existing environment.options
- Optional parameters.
NO_ERROR_IF_NOT_EXISTS
: If TRUE
and if the environment specified in environmentName
does not exist, no error is
returned. If FALSE
and if the environment specified in
environmentName
does not exist, then an
error is returned.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DropModelResponse dropModel(DropModelRequest request) throws GPUdbException
GPUdbException
public DropModelResponse dropModel(String modelName, Map<String,String> options) throws GPUdbException
GPUdbException
public DropSchemaResponse dropSchema(DropSchemaRequest request) throws GPUdbException
schemaName
.request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public DropSchemaResponse dropSchema(String schemaName, Map<String,String> options) throws GPUdbException
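A sketch of this dropSchema overload, combining the NO_ERROR_IF_NOT_EXISTS and CASCADE options described below; the URL and schema name are hypothetical:

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.DropSchemaRequest;

import java.util.HashMap;
import java.util.Map;

public class DropSchemaExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191"); // hypothetical URL

        Map<String, String> options = new HashMap<>();
        // Don't error if the schema is already gone
        options.put(DropSchemaRequest.Options.NO_ERROR_IF_NOT_EXISTS,
                    DropSchemaRequest.Options.TRUE);
        // Drop all tables contained in the schema as well
        options.put(DropSchemaRequest.Options.CASCADE,
                    DropSchemaRequest.Options.TRUE);

        gpudb.dropSchema("staging_schema", options); // hypothetical schema name
    }
}
```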
schemaName
.schemaName
- Name of the schema to be dropped. Must be an existing
schema.options
- Optional parameters.
NO_ERROR_IF_NOT_EXISTS
: If TRUE
and if the schema specified in schemaName
does not exist, no error is
returned. If FALSE
and if the schema specified in schemaName
does not exist, then an error is
returned.
Supported values:
The default value is FALSE
.
CASCADE
: If TRUE
, all tables within the schema will be
dropped. If FALSE
, the schema will be dropped only if
empty.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public EvaluateModelResponse evaluateModel(EvaluateModelRequest request) throws GPUdbException
GPUdbException
public EvaluateModelResponse evaluateModel(String modelName, int replicas, String deploymentMode, String sourceTable, String destinationTable, Map<String,String> options) throws GPUdbException
GPUdbException
public ExecuteProcResponse executeProc(ExecuteProcRequest request) throws GPUdbException
If the proc being executed is distributed, inputTableNames
& inputColumnNames
may be passed to the proc to use for reading data, and
outputTableNames
may be passed to the proc to use for writing data.
If the proc being executed is non-distributed, these table parameters will be ignored.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ExecuteProcResponse executeProc(String procName, Map<String,String> params, Map<String,ByteBuffer> binParams, List<String> inputTableNames, Map<String,List<String>> inputColumnNames, List<String> outputTableNames, Map<String,String> options) throws GPUdbException
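A sketch of this executeProc overload for a distributed proc reading from one table and writing to another; the URL, proc name, and table names are hypothetical, and the run-ID getter assumes the generated naming of the run_id field:

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.ExecuteProcRequest;
import com.gpudb.protocol.ExecuteProcResponse;

import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ExecuteProcExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191"); // hypothetical URL

        // Named string parameters passed through to the proc
        Map<String, String> params = new HashMap<>();
        params.put("iterations", "10");

        ExecuteProcResponse response = gpudb.executeProc(
                "my_distributed_proc",                        // hypothetical proc
                params,
                Collections.<String, ByteBuffer>emptyMap(),   // no binary params
                Collections.singletonList("ki_home.input_table"),
                Collections.<String, List<String>>emptyMap(), // pass all columns
                Collections.singletonList("ki_home.output_table"),
                Collections.singletonMap(
                        ExecuteProcRequest.Options.RUN_TAG, "nightly_run"));

        // The run ID can later be passed to showProcStatus or killProc
        System.out.println("Run ID: " + response.getRunId());
    }
}
```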
If the proc being executed is distributed, inputTableNames
&
inputColumnNames
may be passed to the proc to use for reading
data, and outputTableNames
may be passed to the proc to use for
writing data.
If the proc being executed is non-distributed, these table parameters will be ignored.
procName
- Name of the proc to execute. Must be the name of a
currently existing proc.params
- A map containing named parameters to pass to the proc.
Each key/value pair specifies the name of a parameter and
its value. The default value is an empty Map
.binParams
- A map containing named binary parameters to pass to
the proc. Each key/value pair specifies the name of a
parameter and its value. The default value is an empty
Map
.inputTableNames
- Names of the tables containing data to be passed
to the proc. Each name specified must be the
name of a currently existing table, in
[schema_name.]table_name format, using standard
name resolution rules. If no
table names are specified, no data will be
passed to the proc. This parameter is ignored
if the proc has a non-distributed execution
mode. The default value is an empty List
.inputColumnNames
- Map of table names from inputTableNames
to lists of names of columns from those tables
that will be passed to the proc. Each column
name specified must be the name of an existing
column in the corresponding table. If a table
name from inputTableNames
is not
included, all columns from that table will be
passed to the proc. This parameter is ignored
if the proc has a non-distributed execution
mode. The default value is an empty Map
.outputTableNames
- Names of the tables to which output data from
the proc will be written, each in
[schema_name.]table_name format, using standard
name resolution rules and
meeting table naming criteria. If a
specified table does not exist, it will
automatically be created with the same schema
as the corresponding table (by order) from
inputTableNames
, excluding any primary
and shard keys. If a specified table is a
non-persistent result table, it must not have
primary or shard keys. If no table names are
specified, no output data can be returned from
the proc. This parameter is ignored if the proc
has a non-distributed execution mode. The
default value is an empty List
.options
- Optional parameters.
CACHE_INPUT
: A comma-delimited list of table
names from inputTableNames
from which
input data will be cached for use in subsequent
calls to executeProc
with the
USE_CACHED_INPUT
option. Cached input data will
be retained until the proc status is cleared
with the clear_complete
option of showProcStatus
and all proc instances using the
cached data have completed. The default value is
''.
USE_CACHED_INPUT
: A comma-delimited list of run
IDs (as returned from prior calls to executeProc
) of running or completed
proc instances from which input data cached
using the CACHE_INPUT
option will be used. Cached input
data will not be used for any tables specified
in inputTableNames
, but data from all
other tables cached for the specified run IDs
will be passed to the proc. If the same table
was cached for multiple specified run IDs, the
cached data from the first run ID specified in
the list that includes that table will be used.
The default value is ''.
RUN_TAG
: A string that, if not empty, can be
used in subsequent calls to showProcStatus
or killProc
to identify the proc instance.
The default value is ''.
MAX_OUTPUT_LINES
: The maximum number of lines
of output from stdout and stderr to return via
showProcStatus
. If the number of lines output
exceeds the maximum, earlier lines are
discarded. The default value is '100'.
EXECUTE_AT_STARTUP
: If TRUE
, an instance of the proc will run when the
database is started instead of running
immediately. The runId
can be retrieved using showProc
and used
in showProcStatus
.
Supported values:
The default value is FALSE
.
EXECUTE_AT_STARTUP_AS
: Sets the alternate user
name to execute this proc instance as when
EXECUTE_AT_STARTUP
is TRUE
. The default value is ''.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public RawExecuteSqlResponse executeSqlRaw(ExecuteSqlRequest request) throws GPUdbException
See SQL Support for the complete set of supported SQL commands.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ExecuteSqlResponse executeSql(ExecuteSqlRequest request) throws GPUdbException
See SQL Support for the complete set of supported SQL commands.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ExecuteSqlResponse executeSql(String statement, long offset, long limit, String requestSchemaStr, List<ByteBuffer> data, Map<String,String> options) throws GPUdbException
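A paging sketch for this executeSql overload, using offset and limit together with the has_more_records flag as the parameter docs below describe; the URL, table, and column names are hypothetical, and the getters assume the generated naming of the data and has_more_records fields:

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.ExecuteSqlResponse;

import java.nio.ByteBuffer;
import java.util.Collections;

public class ExecuteSqlPagingExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191"); // hypothetical URL

        final long pageSize = 10000;
        long offset = 0;
        boolean hasMore = true;

        // Page through the result set using offset & limit
        while (hasMore) {
            ExecuteSqlResponse response = gpudb.executeSql(
                    "SELECT id, total FROM ki_home.orders WHERE total > 100",
                    offset,
                    pageSize,
                    "",                                      // no request schema
                    Collections.<ByteBuffer>emptyList(),     // no bound data
                    Collections.<String, String>emptyMap()); // no options

            System.out.println("Fetched " + response.getData().size() + " records");
            hasMore = response.getHasMoreRecords();
            offset += pageSize;
        }
    }
}
```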
See SQL Support for the complete set of supported SQL commands.
statement
- SQL statement (query, DML, or DDL) to be executedoffset
- A positive integer indicating the number of initial
results to skip (this can be useful for paging through
the results). The default value is 0. The minimum allowed
value is 0. The maximum allowed value is MAX_INT.limit
- A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to indicate
that the maximum number of results allowed by the server
should be returned. The number of records returned will
never exceed the server's own limit, defined by the max_get_records_size parameter in the
server configuration. Use hasMoreRecords
to see if more records exist in the result
to be fetched, and offset
& limit
to
request subsequent pages of results. The default value is
-9999.requestSchemaStr
- Avro schema of data
. The default value
is ''.data
- An array of binary-encoded data for the records to be
bound to the SQL query. Alternatively, use QUERY_PARAMETERS
to pass the data in JSON format. The
default value is an empty List
.options
- Optional parameters.
COST_BASED_OPTIMIZATION
: If FALSE
, disables the cost-based optimization of
the given query.
Supported values:
The default value is FALSE
.
DISTRIBUTED_JOINS
: If TRUE
, enables the use of distributed joins in
servicing the given query. Any query requiring
a distributed join will succeed, though hints
can be used in the query to change the
distribution of the source data to allow the
query to succeed.
Supported values:
The default value is FALSE
.
DISTRIBUTED_OPERATIONS
: If TRUE
, enables the use of distributed operations
in servicing the given query. Any query
requiring a distributed join will succeed,
though hints can be used in the query to change
the distribution of the source data to allow the
query to succeed.
Supported values:
The default value is FALSE
.
IGNORE_EXISTING_PK
: Specifies the record
collision error-suppression policy for inserting
into or updating a table with a primary key, only used when
primary key record collisions are rejected
(UPDATE_ON_EXISTING_PK
is FALSE
). If set to TRUE
, any record insert/update that is rejected
for resulting in a primary key collision with an
existing table record will be ignored with no
error generated. If FALSE
, the rejection of any insert/update for
resulting in a primary key collision will cause
an error to be reported. If the specified table
does not have a primary key or if UPDATE_ON_EXISTING_PK
is TRUE
, then this option has no effect.
Supported values:
TRUE
: Ignore inserts/updates that
result in primary key collisions with
existing records
FALSE
: Treat as errors any
inserts/updates that result in primary
key collisions with existing records
FALSE
.
LATE_MATERIALIZATION
: If TRUE
, Joins/Filters results will always be
materialized (saved to result tables format).
Supported values:
The default value is FALSE
.
PAGING_TABLE
: If empty, or if the specified
paging table does not exist, the system will create
a paging table and return it when the query output has
more records than the user requested. If the paging
table already exists in the system, the records from the
paging table are returned without evaluating the
query.
PAGING_TABLE_TTL
: Sets the TTL of the paging table.
PARALLEL_EXECUTION
: If FALSE
, disables the parallel step execution of
the given query.
Supported values:
The default value is TRUE
.
PLAN_CACHE
: If FALSE
, disables plan caching for the given
query.
Supported values:
The default value is TRUE
.
PREPARE_MODE
: If TRUE
, compiles a query into an execution plan
and saves it in query cache. Query execution is
not performed and an empty response will be
returned to the user.
Supported values:
The default value is FALSE
.
PRESERVE_DICT_ENCODING
: If TRUE
, then columns that were dict encoded in
the source table will be dict encoded in the
projection table.
Supported values:
The default value is TRUE
.
QUERY_PARAMETERS
: Query parameters in JSON
array or arrays (for inserting multiple rows).
This can be used instead of data
and
requestSchemaStr
.
RESULTS_CACHING
: If FALSE
, disables caching of the results of the
given query.
Supported values:
The default value is TRUE
.
RULE_BASED_OPTIMIZATION
: If FALSE
, disables rule-based rewrite
optimizations for the given query.
Supported values:
The default value is TRUE
.
SSQ_OPTIMIZATION
: If FALSE
, scalar subqueries will be translated
into joins.
Supported values:
The default value is TRUE
.
TTL
: Sets the TTL of the intermediate result
tables used in query execution.
UPDATE_ON_EXISTING_PK
: Specifies the record
collision policy for inserting into or updating
a table with a primary key. If set to TRUE
, any existing table record with primary
key values that match those of a record being
inserted or updated will be replaced by that
record. If set to FALSE
, any such primary key collision will
result in the insert/update being rejected and
the error handled as determined by IGNORE_EXISTING_PK
. If the specified table
does not have a primary key, then this option
has no effect.
Supported values:
TRUE
: Replace the collided-into record
with the record inserted or updated when
a new/modified record causes a primary
key collision with an existing record
FALSE
: Reject the insert or update when
it results in a primary key collision
with an existing record
FALSE
.
VALIDATE_CHANGE_COLUMN
: When changing a column
using alter table, validate the change before
applying it. If TRUE
, then validate all values. A value too
large (or too long) for the new type will
prevent any change. If FALSE
, then when a value is too large or long,
it will be truncated.
Supported values:
The default value is TRUE
.
CURRENT_SCHEMA
: Use the supplied value as the
default schema when processing
this SQL command.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ExportQueryMetricsResponse exportQueryMetrics(ExportQueryMetricsRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public ExportQueryMetricsResponse exportQueryMetrics(Map<String,String> options) throws GPUdbException
options
- Optional parameters.
EXPRESSION
: Filter for multi query export
FILEPATH
: Path to export target specified as a
filename or existing directory.
FORMAT
: Specifies which format to export the
metrics.
Supported values:
JSON
: Generic JSON output
JSON_TRACE_EVENT
: Chromium/Perfetto
trace event format
JSON
.
JOB_ID
: Export query metrics for the currently
running job
LIMIT
: Record limit per file for multi query
export
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public ExportRecordsToFilesResponse exportRecordsToFiles(ExportRecordsToFilesRequest request) throws GPUdbException
COLUMNS_TO_EXPORT
and COLUMNS_TO_SKIP
). Additional filtering can be applied when using export
table with expression through SQL. Default destination is KIFS, though
other storage types (Azure, S3, GCS, and HDFS) are supported through
DATASINK_NAME
; see createDatasink
.
The server's local file system is not supported. The default file format is delimited text; see options for the supported file types and the options available for each. The table is saved to a single file if it is within the maximum file size limit (which may vary depending on datasink type); otherwise, the table is split into multiple files, each of which may be smaller than the maximum size limit.
All filenames created are returned in the response.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public ExportRecordsToFilesResponse exportRecordsToFiles(String tableName, String filepath, Map<String,String> options) throws GPUdbException
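A sketch of this exportRecordsToFiles overload writing pipe-delimited text to a KiFS directory; the URL, table, columns, and target path are hypothetical, and the response getter assumes the list of created filenames is exposed via a generated files field:

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.ExportRecordsToFilesRequest;
import com.gpudb.protocol.ExportRecordsToFilesResponse;

import java.util.HashMap;
import java.util.Map;

public class ExportRecordsToFilesExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191"); // hypothetical URL

        Map<String, String> options = new HashMap<>();
        options.put(ExportRecordsToFilesRequest.Options.FILE_TYPE,
                    ExportRecordsToFilesRequest.Options.DELIMITED_TEXT);
        options.put(ExportRecordsToFilesRequest.Options.TEXT_DELIMITER, "|");
        options.put(ExportRecordsToFilesRequest.Options.COLUMNS_TO_EXPORT,
                    "id,name,total"); // hypothetical columns

        // Export into an existing KiFS directory; each file gets a UUID suffix
        ExportRecordsToFilesResponse response = gpudb.exportRecordsToFiles(
                "ki_home.orders",   // hypothetical source table
                "export/orders/",   // KiFS target directory
                options);

        System.out.println("Files written: " + response.getFiles());
    }
}
```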
COLUMNS_TO_EXPORT
and COLUMNS_TO_SKIP
). Additional filtering can be applied when using export
table with expression through SQL. Default destination is KIFS, though
other storage types (Azure, S3, GCS, and HDFS) are supported through
DATASINK_NAME
; see createDatasink
.
The server's local file system is not supported. The default file format is delimited text; see options for the supported file types and the options available for each. The table is saved to a single file if it is within the maximum file size limit (which may vary depending on datasink type); otherwise, the table is split into multiple files, each of which may be smaller than the maximum size limit.
All filenames created are returned in the response.
tableName
- Name of the table from which data will be exported, in
[schema_name.]table_name format, using standard name
resolution rules.filepath
- Path to data export target. If filepath
has a
file extension, it is read as the name of a file. If
filepath
is a directory, then the source table
name with a random UUID appended will be used as the
name of each exported file, all written to that
directory. If filepath is a filename, then all exported
files will have a random UUID appended to the given
name. In either case, the target directory specified
or implied must exist. The names of all exported files
are returned in the response.options
- Optional parameters.
BATCH_SIZE
: Number of records to be exported as
a batch. The default value is '1000000'.
COLUMN_FORMATS
: For each source column
specified, applies the column-property-bound
format. Currently supported column properties
include date, time, & datetime. The parameter
value must be formatted as a JSON string of maps
of column names to maps of column properties to
their corresponding column formats, e.g., '{
"order_date" : { "date" : "%Y.%m.%d" },
"order_time" : { "time" : "%H:%M:%S" } }'. See
DEFAULT_COLUMN_FORMATS
for valid format syntax.
COLUMNS_TO_EXPORT
: Specifies a comma-delimited
list of columns from the source table to export,
written to the output file in the order they are
given. Column names can be provided, in which
case the target file will use those names as the
column headers as well. Alternatively, column
numbers can be specified--discretely or as a
range. For example, a value of '5,7,1..3' will
write values from the fifth column in the source
table into the first column in the target file,
from the seventh column in the source table into
the second column in the target file, and from
the first through third columns in the source
table into the third through fifth columns in
the target file. Mutually exclusive with COLUMNS_TO_SKIP
.
COLUMNS_TO_SKIP
: Comma-separated list of column
names or column numbers to not export. All
columns in the source table not specified will
be written to the target file in the order they
appear in the table definition. Mutually
exclusive with COLUMNS_TO_EXPORT
.
DATASINK_NAME
: Datasink name, created using
createDatasink
.
DEFAULT_COLUMN_FORMATS
: Specifies the default
format to use to write data. Currently
supported column properties include date, time,
& datetime. This default column-property-bound
format can be overridden by specifying a column
property & format for a given source column in
COLUMN_FORMATS
. For each specified annotation,
the format will apply to all columns with that
annotation unless custom COLUMN_FORMATS
for that annotation are
specified. The parameter value must be
formatted as a JSON string that is a map of
column properties to their respective column
formats, e.g., '{ "date" : "%Y.%m.%d", "time" :
"%H:%M:%S" }'. Column formats are specified as
a string of control characters and plain text.
The supported control characters are 'Y', 'm',
'd', 'H', 'M', and 'S', which follow the
Linux 'strptime()' specification, as well as
's', which specifies seconds and fractional
seconds (though the fractional component will be
truncated past milliseconds). Formats for the
'date' annotation must include the 'Y', 'm', and
'd' control characters. Formats for the 'time'
annotation must include the 'H', 'M', and either
'S' or 's' (but not both) control characters.
Formats for the 'datetime' annotation meet both
the 'date' and 'time' control character
requirements. For example, '{"datetime" :
"%m/%d/%Y %H:%M:%S" }' would be used to write
text as "05/04/2000 12:12:11".
EXPORT_DDL
: Save DDL to a separate file. The
default value is 'false'.
FILE_EXTENSION
: Extension to give the export
file. The default value is '.csv'.
FILE_TYPE
: Specifies the file format to use
when exporting data.
Supported values:
DELIMITED_TEXT
: Delimited text file
format; e.g., CSV, TSV, PSV, etc.
PARQUET
DELIMITED_TEXT
.
KINETICA_HEADER
: Whether to include a Kinetica
proprietary header. Will not be written if
TEXT_HAS_HEADER
is FALSE
.
Supported values:
The default value is FALSE
.
KINETICA_HEADER_DELIMITER
: If a Kinetica
proprietary header is included, then specify a
property separator. Different from column
delimiter. The default value is '|'.
COMPRESSION_TYPE
: File compression type. GZip
can be applied to text and Parquet files.
Snappy can only be applied to Parquet files, and
is the default compression for them.
Supported values:
SINGLE_FILE
: Save records to a single file.
This option may be ignored if file size exceeds
internal file size limits (this limit will
differ on different targets).
Supported values:
The default value is TRUE
.
SINGLE_FILE_MAX_SIZE
: Max file size (in MB) to
allow saving to a single file. May be overridden
by target limitations. The default value is ''.
TEXT_DELIMITER
: Specifies the character to
write out to delimit field values and field
names in the header (if present). For DELIMITED_TEXT
FILE_TYPE
only. The default value is ','.
TEXT_HAS_HEADER
: Indicates whether to write out
a header row. For DELIMITED_TEXT
FILE_TYPE
only.
Supported values:
The default value is TRUE
.
TEXT_NULL_STRING
: Specifies the character
string that should be written out for the null
value in the data. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '\N'.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public ExportRecordsToTableResponse exportRecordsToTable(ExportRecordsToTableRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public ExportRecordsToTableResponse exportRecordsToTable(String tableName, String remoteQuery, Map<String,String> options) throws GPUdbException
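A sketch of this exportRecordsToTable overload using the REMOTE_TABLE option instead of a parameterized remote query; the URL, sink, and table names are hypothetical:

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.ExportRecordsToTableRequest;

import java.util.HashMap;
import java.util.Map;

public class ExportRecordsToTableExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191"); // hypothetical URL

        Map<String, String> options = new HashMap<>();
        // Existing data sink pointing at the remote database (hypothetical name)
        options.put(ExportRecordsToTableRequest.Options.DATASINK_NAME,
                    "remote_pg_sink");
        // Export into a named remote table; remote_query must then be empty
        options.put(ExportRecordsToTableRequest.Options.REMOTE_TABLE,
                    "public.orders_copy");
        options.put(ExportRecordsToTableRequest.Options.BATCH_SIZE, "100000");

        gpudb.exportRecordsToTable(
                "ki_home.orders",  // local source table
                "",                // remoteQuery left empty; REMOTE_TABLE is used
                options);
    }
}
```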
tableName
- Name of the table from which the data will be exported
to remote database, in [schema_name.]table_name
format, using standard name resolution rules.remoteQuery
- Parameterized insert query to export gpudb table
data into remote database. The default value is ''.options
- Optional parameters.
BATCH_SIZE
: Batch size, which determines how
many rows to export per round trip. The default
value is '200000'.
DATASINK_NAME
: Name of an existing external
data sink to which table name specified in
tableName
will be exported
JDBC_SESSION_INIT_STATEMENT
: Executes the
statement per each jdbc session before doing
actual load. The default value is ''.
JDBC_CONNECTION_INIT_STATEMENT
: Executes the
statement once before doing actual load. The
default value is ''.
REMOTE_TABLE
: Name of the target table to which
source table is exported. When this option is
specified, remote_query cannot be specified. The
default value is ''.
USE_ST_GEOMFROM_CASTS
: Wraps parameterized
variables with st_geomfromtext or st_geomfromwkb
based on source column type.
Supported values:
The default value is FALSE
.
USE_INDEXED_PARAMETERS
: Uses $n style syntax
when generating insert query for remote_table
option.
Supported values:
The default value is FALSE
.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public FilterResponse filter(FilterRequest request) throws GPUdbException
viewName
.
For details see Expressions.
The response message contains the number of points for which the expression evaluated to be true, which is equivalent to the size of the result view.
request
- Request
object containing the
parameters for the operation.Response
object containing the results of
the operation.GPUdbException
- if an error occurs during the operation.public FilterResponse filter(String tableName, String viewName, String expression, Map<String,String> options) throws GPUdbException
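A sketch of this filter overload creating a TTL-limited view from an expression; the URL, table, view, and column names are hypothetical, and the count getter assumes the generated naming of the count field:

```java
import com.gpudb.GPUdb;
import com.gpudb.GPUdbException;
import com.gpudb.protocol.FilterRequest;
import com.gpudb.protocol.FilterResponse;

import java.util.Collections;
import java.util.Map;

public class FilterExample {
    public static void main(String[] args) throws GPUdbException {
        GPUdb gpudb = new GPUdb("http://localhost:9191"); // hypothetical URL

        // Let the result view expire automatically
        Map<String, String> options = Collections.singletonMap(
                FilterRequest.Options.TTL, "120");

        FilterResponse response = gpudb.filter(
                "ki_home.orders",        // source table
                "ki_home.recent_orders", // result view (must not already exist)
                "purchase_date > '2023-01-01' AND total >= 50",
                options);

        // The count equals the number of records in the new view
        System.out.println("Matching records: " + response.getCount());
    }
}
```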
viewName
.
For details see Expressions.
The response message contains the number of points for which the expression evaluated to be true, which is equivalent to the size of the result view.
tableName
- Name of the table to filter, in
[schema_name.]table_name format, using standard name resolution rules. This may be
the name of a table or a view (when chaining queries).viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.expression
- The select expression to filter the specified table.
For details see Expressions.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
is non-existent, it will be automatically
created.
VIEW_ID
: view this filtered-view is part of.
The default value is ''.
TTL
: Sets the TTL of the view specified in
viewName
.
Map
.Response
object containing the results of
the operation.GPUdbException
- if an error occurs during the operation.public FilterByAreaResponse filterByArea(FilterByAreaRequest request) throws GPUdbException
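A sketch of the options map for the filter overload above. The lowercase key strings are assumed to mirror the documented CREATE_TEMP_TABLE and TTL constants; the table name and expression in the comment are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class FilterOptionsExample {
    // Options for filter(): let the server generate a temporary view name
    // and give the view a TTL (unit assumed to be minutes).
    public static Map<String, String> buildFilterOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("create_temp_table", "true"); // generated name comes back in qualified_view_name
        options.put("ttl", "120");                // assumed unit: minutes
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> options = buildFilterOptions();
        // Hypothetical invocation against a live cluster:
        // FilterResponse resp = gpudb.filter("ki_home.nyctaxi", "",
        //         "fare_amount > 20.0", options);
        // resp.getCount() is the size of the result view.
        System.out.println(options);
    }
}
```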
viewName
passed in
as part of the input.request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByAreaResponse filterByArea(String tableName, String viewName, String xColumnName, List<Double> xVector, String yColumnName, List<Double> yVector, Map<String,String> options) throws GPUdbException
viewName
passed in as
part of the input.tableName
- Name of the table to filter, in
[schema_name.]table_name format, using standard name resolution rules. This may be
the name of a table or a view (when chaining queries).viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.xColumnName
- Name of the column containing the x values to be
filtered.xVector
- List of x coordinates of the vertices of the polygon
representing the area to be filtered.yColumnName
- Name of the column containing the y values to be
filtered.yVector
- List of y coordinates of the vertices of the polygon
representing the area to be filtered.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
provided is non-existent, it will be
automatically created.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByAreaGeometryResponse filterByAreaGeometry(FilterByAreaGeometryRequest request) throws GPUdbException
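The filterByArea overload above takes the polygon as two parallel vertex lists: the i-th entries of xVector and yVector together form the i-th vertex. A sketch with illustrative coordinates (table and column names in the comment are hypothetical):

```java
import java.util.Arrays;
import java.util.List;

public class AreaPolygonExample {
    // A rectangle expressed as parallel coordinate lists, one entry per
    // vertex; the two lists must be the same length.
    public static List<Double> xVector() { return Arrays.asList(0.0, 10.0, 10.0, 0.0); }
    public static List<Double> yVector() { return Arrays.asList(0.0,  0.0,  5.0, 5.0); }

    public static void main(String[] args) {
        // Hypothetical invocation:
        // gpudb.filterByArea("ki_home.points", "", "x", xVector(),
        //         "y", yVector(), new java.util.HashMap<>());
        System.out.println(xVector().size() + " vertices");
    }
}
```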
viewName
passed in as part of the input.request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByAreaGeometryResponse filterByAreaGeometry(String tableName, String viewName, String columnName, List<Double> xVector, List<Double> yVector, Map<String,String> options) throws GPUdbException
viewName
passed in as part of the input.tableName
- Name of the table to filter, in
[schema_name.]table_name format, using standard name resolution rules. This may be
the name of a table or a view (when chaining queries).viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.columnName
- Name of the geospatial geometry column to be
filtered.xVector
- List of x coordinates of the vertices of the polygon
representing the area to be filtered.yVector
- List of y coordinates of the vertices of the polygon
representing the area to be filtered.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] The schema
for the newly created view. If the schema is
non-existent, it will be automatically created.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByBoxResponse filterByBox(FilterByBoxRequest request) throws GPUdbException
viewName
is
passed in as part of the input payload.request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByBoxResponse filterByBox(String tableName, String viewName, String xColumnName, double minX, double maxX, String yColumnName, double minY, double maxY, Map<String,String> options) throws GPUdbException
viewName
is passed in as part of the input payload.tableName
- Name of the table on which the bounding box operation
will be performed, in [schema_name.]table_name format,
using standard name resolution rules. Must be an
existing table.viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.xColumnName
- Name of the column on which to perform the bounding
box query. Must be a valid numeric column.minX
- Lower bound for the column chosen by xColumnName
.
Must be less than or equal to maxX
.maxX
- Upper bound for xColumnName
. Must be greater than
or equal to minX
.yColumnName
- Name of a column on which to perform the bounding
box query. Must be a valid numeric column.minY
- Lower bound for yColumnName
. Must be less than or
equal to maxY
.maxY
- Upper bound for yColumnName
. Must be greater than
or equal to minY
.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
is non-existent, it will be automatically
created.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByBoxGeometryResponse filterByBoxGeometry(FilterByBoxGeometryRequest request) throws GPUdbException
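The filterByBox overload above requires minX &lt;= maxX and minY &lt;= maxY; a small client-side pre-check avoids a round trip that would end in a GPUdbException for inverted bounds. Table and column names in the comment are hypothetical.

```java
public class BoxBoundsCheck {
    // Mirrors the documented preconditions on the bounding-box arguments.
    public static boolean validBox(double minX, double maxX, double minY, double maxY) {
        return minX <= maxX && minY <= maxY;
    }

    public static void main(String[] args) {
        // Hypothetical invocation, guarded by the pre-check:
        // if (validBox(-10, 10, -5, 5)) {
        //     gpudb.filterByBox("ki_home.points", "", "x", -10, 10,
        //             "y", -5, 5, new java.util.HashMap<>());
        // }
        System.out.println(validBox(-10, 10, -5, 5));
    }
}
```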
viewName
is
passed in as part of the input payload.request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByBoxGeometryResponse filterByBoxGeometry(String tableName, String viewName, String columnName, double minX, double maxX, double minY, double maxY, Map<String,String> options) throws GPUdbException
viewName
is passed in as part of the input
payload.tableName
- Name of the table on which the bounding box operation
will be performed, in [schema_name.]table_name format,
using standard name resolution rules. Must be an
existing table.viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.columnName
- Name of the geospatial geometry column to be
filtered.minX
- Lower bound for the x-coordinate of the rectangular box.
Must be less than or equal to maxX
.maxX
- Upper bound for the x-coordinate of the rectangular box.
Must be greater than or equal to minX
.minY
- Lower bound for the y-coordinate of the rectangular box.
Must be less than or equal to maxY
.maxY
- Upper bound for the y-coordinate of the rectangular box.
Must be greater than or equal to minY
.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
provided is non-existent, it will be
automatically created.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByGeometryResponse filterByGeometry(FilterByGeometryRequest request) throws GPUdbException
inputWkt
.request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByGeometryResponse filterByGeometry(String tableName, String viewName, String columnName, String inputWkt, String operation, Map<String,String> options) throws GPUdbException
inputWkt
.tableName
- Name of the table on which the filter by geometry will
be performed, in [schema_name.]table_name format,
using standard name resolution rules. Must be an
existing table or view containing a geospatial
geometry column.viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.columnName
- Name of the column to be used in the filter. Must be
a geospatial geometry column.inputWkt
- A geometry in WKT format that will be used to filter
the objects in tableName
. The default value is
''.operation
- The geometric filtering operation to perform.
Supported values:
CONTAINS
: Matches records that contain the
given WKT in inputWkt
, i.e. the given
WKT is within the bounds of a record's
geometry.
CROSSES
: Matches records that cross the given
WKT.
DISJOINT
: Matches records that are disjoint
from the given WKT.
EQUALS
: Matches records that are the same as
the given WKT.
INTERSECTS
: Matches records that intersect
the given WKT.
OVERLAPS
: Matches records that overlap the
given WKT.
TOUCHES
: Matches records that touch the given
WKT.
WITHIN
: Matches records that are within the
given WKT.
options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
provided is non-existent, it will be
automatically created.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByListResponse filterByList(FilterByListRequest request) throws GPUdbException
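For the filterByGeometry overload above, inputWkt is a geometry in WKT text form and operation selects the spatial predicate. The lowercase operation string is assumed to mirror the documented constants (CONTAINS, INTERSECTS, ...); table and column names in the comment are hypothetical.

```java
public class GeometryFilterExample {
    // A simple rectangular polygon in WKT: a closed ring whose last
    // vertex repeats the first.
    public static String inputWkt() {
        return "POLYGON((0 0, 10 0, 10 5, 0 5, 0 0))";
    }

    public static void main(String[] args) {
        String operation = "intersects"; // assumed lowercase form of INTERSECTS
        // Hypothetical invocation:
        // gpudb.filterByGeometry("ki_home.shapes", "", "geom",
        //         inputWkt(), operation, new java.util.HashMap<>());
        System.out.println(operation + " vs " + inputWkt());
    }
}
```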
viewName
is passed
in as part of the request.
For example, if a type definition has the columns 'x' and 'y', then a filter by list query with the column map {"x":["10.1", "2.3"], "y":["0.0", "-31.5", "42.0"]} will return the count of all data points whose x and y values match both in the respective x- and y-lists, e.g., "x = 10.1 and y = 0.0", "x = 2.3 and y = -31.5", etc. However, a record with "x = 10.1 and y = -31.5" or "x = 2.3 and y = 0.0" would not be returned because the values in the given lists do not correspond.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByListResponse filterByList(String tableName, String viewName, Map<String,List<String>> columnValuesMap, Map<String,String> options) throws GPUdbException
viewName
is passed in as part of the request.
For example, if a type definition has the columns 'x' and 'y', then a filter by list query with the column map {"x":["10.1", "2.3"], "y":["0.0", "-31.5", "42.0"]} will return the count of all data points whose x and y values match both in the respective x- and y-lists, e.g., "x = 10.1 and y = 0.0", "x = 2.3 and y = -31.5", etc. However, a record with "x = 10.1 and y = -31.5" or "x = 2.3 and y = 0.0" would not be returned because the values in the given lists do not correspond.
tableName
- Name of the table to filter, in
[schema_name.]table_name format, using standard name resolution rules. This may be
the name of a table or a view (when chaining queries).viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.columnValuesMap
- List of values for the corresponding column in
the tableoptions
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
provided is non-existent, it will be
automatically created.
FILTER_MODE
: String indicating the filter mode,
either 'in_list' or 'not_in_list'.
Supported values:
IN_LIST
: The filter will match all
items that are in the provided list(s).
NOT_IN_LIST
: The filter will match all
items that are not in the provided
list(s).
The default value is IN_LIST.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByRadiusResponse filterByRadius(FilterByRadiusRequest request) throws GPUdbException
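The columnValuesMap argument of filterByList above can be built as a plain map from column name to value list; this sketch reproduces the x/y column map from the worked example (matching follows the correspondence rule described there). The table name in the comment is hypothetical.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ListFilterExample {
    // The column map from the documentation's example: values are passed
    // as strings regardless of the column's numeric type.
    public static Map<String, List<String>> columnValuesMap() {
        Map<String, List<String>> m = new LinkedHashMap<>();
        m.put("x", Arrays.asList("10.1", "2.3"));
        m.put("y", Arrays.asList("0.0", "-31.5", "42.0"));
        return m;
    }

    public static void main(String[] args) {
        // Hypothetical invocation; "not_in_list" in the options map would
        // invert the match via the filter_mode option:
        // gpudb.filterByList("ki_home.points", "", columnValuesMap(),
        //         new java.util.HashMap<>());
        System.out.println(columnValuesMap());
    }
}
```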
viewName
is
passed in as part of the request.
For track data, all track points that lie within the circle plus one point on either side of the circle (if the track goes beyond the circle) will be included in the result.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByRadiusResponse filterByRadius(String tableName, String viewName, String xColumnName, double xCenter, String yColumnName, double yCenter, double radius, Map<String,String> options) throws GPUdbException
viewName
is passed in as part of the request.
For track data, all track points that lie within the circle plus one point on either side of the circle (if the track goes beyond the circle) will be included in the result.
tableName
- Name of the table on which the filter by radius
operation will be performed, in
[schema_name.]table_name format, using standard name resolution rules. Must be an
existing table.viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.xColumnName
- Name of the column to be used for the x-coordinate
(the longitude) of the center.xCenter
- Value of the longitude of the center. Must be within
[-180.0, 180.0]. The minimum allowed value is -180. The
maximum allowed value is 180.yColumnName
- Name of the column to be used for the
y-coordinate (the latitude) of the center.yCenter
- Value of the latitude of the center. Must be within
[-90.0, 90.0]. The minimum allowed value is -90. The
maximum allowed value is 90.radius
- The radius of the circle within which the search will be
performed. Must be a non-zero positive value. It is in
meters; so, for example, a value of '42000' means 42 km.
The minimum allowed value is 0. The maximum allowed value
is MAX_INT.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema which is to contain the newly created
view. If the schema is non-existent, it will be
automatically created.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByRadiusGeometryResponse filterByRadiusGeometry(FilterByRadiusGeometryRequest request) throws GPUdbException
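The filterByRadius overload above constrains its center to valid geographic coordinates and requires a positive radius in meters; a client-side check makes those documented bounds explicit. Table and column names in the comment are hypothetical.

```java
public class RadiusArgsCheck {
    // Mirrors the documented argument constraints: longitude in
    // [-180, 180], latitude in [-90, 90], radius strictly positive.
    public static boolean validCenter(double lon, double lat, double radiusMeters) {
        return lon >= -180.0 && lon <= 180.0
            && lat >= -90.0 && lat <= 90.0
            && radiusMeters > 0.0;
    }

    public static void main(String[] args) {
        // Hypothetical invocation: a 42 km circle around (2.35, 48.85).
        // if (validCenter(2.35, 48.85, 42000)) {
        //     gpudb.filterByRadius("ki_home.points", "", "lon", 2.35,
        //             "lat", 48.85, 42000, new java.util.HashMap<>());
        // }
        System.out.println(validCenter(2.35, 48.85, 42000));
    }
}
```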
viewName
is passed in as part of the request.request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByRadiusGeometryResponse filterByRadiusGeometry(String tableName, String viewName, String columnName, double xCenter, double yCenter, double radius, Map<String,String> options) throws GPUdbException
viewName
is passed in as part of the request.tableName
- Name of the table on which the filter by radius
operation will be performed, in
[schema_name.]table_name format, using standard name resolution rules. Must be an
existing table.viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.columnName
- Name of the geospatial geometry column to be
filtered.xCenter
- Value of the longitude of the center. Must be within
[-180.0, 180.0]. The minimum allowed value is -180. The
maximum allowed value is 180.yCenter
- Value of the latitude of the center. Must be within
[-90.0, 90.0]. The minimum allowed value is -90. The
maximum allowed value is 90.radius
- The radius of the circle within which the search will be
performed. Must be a non-zero positive value. It is in
meters; so, for example, a value of '42000' means 42 km.
The minimum allowed value is 0. The maximum allowed value
is MAX_INT.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
provided is non-existent, it will be
automatically created.
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByRangeResponse filterByRange(FilterByRangeRequest request) throws GPUdbException
tableName
is
added to the view viewName
if its
column is within [lowerBound
,
upperBound
] (inclusive). The operation is synchronous. The response
provides a count of the number of objects which passed the bound filter.
Although this functionality can also be accomplished with the standard
filter function, it is more efficient.
For track objects, the count reflects how many points fall within the given bounds (which may not include all the track points of any given track).
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByRangeResponse filterByRange(String tableName, String viewName, String columnName, double lowerBound, double upperBound, Map<String,String> options) throws GPUdbException
tableName
is added to the view viewName
if its column is within [lowerBound
, upperBound
] (inclusive). The operation is
synchronous. The response provides a count of the number of objects
which passed the bound filter. Although this functionality can also be
accomplished with the standard filter function, it is more efficient.
For track objects, the count reflects how many points fall within the given bounds (which may not include all the track points of any given track).
tableName
- Name of the table on which the filter by range
operation will be performed, in
[schema_name.]table_name format, using standard name resolution rules. Must be an
existing table.viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.columnName
- Name of a column on which the operation would be
applied.lowerBound
- Value of the lower bound (inclusive).upperBound
- Value of the upper bound (inclusive).options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
is non-existent, it will be automatically
created.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterBySeriesResponse filterBySeries(FilterBySeriesRequest request) throws GPUdbException
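The filterByRange overload above keeps records whose column value lies within [lowerBound, upperBound], both bounds inclusive; the predicate below restates that test locally. Table and column names in the comment are hypothetical.

```java
public class RangePredicateExample {
    // Restates the documented filter condition: both bounds inclusive.
    public static boolean inRange(double value, double lowerBound, double upperBound) {
        return value >= lowerBound && value <= upperBound;
    }

    public static void main(String[] args) {
        // Hypothetical invocation:
        // gpudb.filterByRange("ki_home.points", "", "x", 0.0, 100.0,
        //         new java.util.HashMap<>());
        System.out.println(inRange(100.0, 0.0, 100.0)); // true: upper bound is inclusive
    }
}
```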
This operation is synchronous, meaning that a response will not be returned until all the objects are fully available.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterBySeriesResponse filterBySeries(String tableName, String viewName, String trackId, List<String> targetTrackIds, Map<String,String> options) throws GPUdbException
This operation is synchronous, meaning that a response will not be returned until all the objects are fully available.
tableName
- Name of the table on which the filter by track
operation will be performed, in
[schema_name.]table_name format, using standard name resolution rules. Must be a
currently existing table with a track present.viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.trackId
- The ID of the track which will act as the filtering
points. Must be an existing track within the given
table.targetTrackIds
- Up to one track ID to intersect with the "filter"
track. If provided, it must be a valid track
ID within the given set.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
is non-existent, it will be automatically
created.
SPATIAL_RADIUS
: A positive number passed as a
string representing the radius of the search
area centered around each track point's
geospatial coordinates. The value is interpreted
in meters. Required parameter. The minimum
allowed value is '0'.
TIME_RADIUS
: A positive number passed as a
string representing the maximum allowable time
difference between the timestamps of a filtered
object and the given track's points. The value
is interpreted in seconds. Required parameter.
The minimum allowed value is '0'.
SPATIAL_DISTANCE_METRIC
: A string representing
the coordinate system to use for the spatial
search criteria. Acceptable values are
'euclidean' and 'great_circle'. Optional
parameter; default is 'euclidean'.
Supported values: EUCLIDEAN, GREAT_CIRCLE.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByStringResponse filterByString(FilterByStringRequest request) throws GPUdbException
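For filterBySeries above, spatial_radius and time_radius are required options, passed as strings in meters and seconds respectively. A sketch of assembling them (lowercase keys assumed to mirror the documented constants; table and track IDs in the comment are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class SeriesOptionsExample {
    // Required and optional options for filterBySeries().
    public static Map<String, String> buildSeriesOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("spatial_radius", "500");                   // meters, required
        options.put("time_radius", "60");                       // seconds, required
        options.put("spatial_distance_metric", "great_circle"); // optional; default euclidean
        return options;
    }

    public static void main(String[] args) {
        // Hypothetical invocation:
        // gpudb.filterBySeries("ki_home.flights", "", "track_42",
        //         java.util.Collections.emptyList(), buildSeriesOptions());
        System.out.println(buildSeriesOptions());
    }
}
```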
CASE_SENSITIVE
can modify case sensitivity in matching for all modes
except SEARCH
. For SEARCH
mode details and limitations, see Full Text
Search.request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByStringResponse filterByString(String tableName, String viewName, String expression, String mode, List<String> columnNames, Map<String,String> options) throws GPUdbException
CASE_SENSITIVE
can modify case sensitivity in matching for all modes
except SEARCH
. For SEARCH
mode details and limitations, see Full Text
Search.tableName
- Name of the table on which the filter operation will
be performed, in [schema_name.]table_name format,
using standard name resolution rules. Must be an
existing table or view.viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.expression
- The expression with which to filter the table.mode
- The string filtering mode to apply. See below for details.
Supported values:
SEARCH
: Full text search query with wildcards and
boolean operators. Note that for this mode, no
column can be specified in columnNames
; all
string columns of the table that have text search
enabled will be searched.
EQUALS
: Exact whole-string match (accelerated).
CONTAINS
: Partial substring match (not
accelerated). If the column is a string type
(non-charN) and the number of records is too large,
it will return 0.
STARTS_WITH
: Strings that start with the given
expression (not accelerated). If the column is a
string type (non-charN) and the number of records
is too large, it will return 0.
REGEX
: Full regular expression search (not
accelerated). If the column is a string type
(non-charN) and the number of records is too large,
it will return 0.
columnNames
- List of columns on which to apply the filter.
Ignored for SEARCH
mode.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values: TRUE, FALSE.
The default value is FALSE.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
is non-existent, it will be automatically
created.
CASE_SENSITIVE
: If FALSE
then string filtering will ignore case.
Does not apply to SEARCH
mode.
Supported values: TRUE, FALSE.
The default value is TRUE.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByTableResponse filterByTable(FilterByTableRequest request) throws GPUdbException
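A sketch of a case-insensitive substring match with the filterByString overload above. Lowercase mode and option strings are assumed to mirror the documented constants; the table, column, and expression in the comment are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class StringFilterExample {
    // Disable case sensitivity; per the docs this option does not apply
    // to "search" mode.
    public static Map<String, String> buildStringOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("case_sensitive", "false");
        return options;
    }

    public static void main(String[] args) {
        String mode = "contains"; // assumed lowercase form of CONTAINS
        // Hypothetical invocation:
        // gpudb.filterByString("ki_home.customers", "", "smith", mode,
        //         java.util.Arrays.asList("last_name"), buildStringOptions());
        System.out.println(mode + " " + buildStringOptions());
    }
}
```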
viewName
is
specified, then the filtered objects will then be put in a newly created
view. The operation is synchronous, meaning that a response will not be
returned until all objects are fully available in the result view. The
return value contains the count (i.e. the size) of the resulting
view.request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public FilterByTableResponse filterByTable(String tableName, String viewName, String columnName, String sourceTableName, String sourceTableColumnName, Map<String,String> options) throws GPUdbException
viewName
is specified, then the filtered objects
will then be put in a newly created view. The operation is synchronous,
meaning that a response will not be returned until all objects are fully
available in the result view. The return value contains the count
(i.e. the size) of the resulting view.tableName
- Name of the table whose data will be filtered, in
[schema_name.]table_name format, using standard name resolution rules. Must be an
existing table.viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.columnName
- Name of the column by whose value the data will be
filtered from the table designated by tableName
.sourceTableName
- Name of the table whose data will be compared
against in the table called tableName
,
in [schema_name.]table_name format, using
standard name resolution rules. Must
be an existing table.sourceTableColumnName
- Name of the column in the sourceTableName
whose values will be used
as the filter for table tableName
.
Must be a geospatial geometry column if in
'spatial' mode; otherwise, must match the
type of the columnName
.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values:
The default value is FALSE
.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
is non-existent, it will be automatically
created.
FILTER_MODE
: String indicating the filter mode,
either IN_TABLE
or NOT_IN_TABLE
.
Supported values:
The default value is IN_TABLE
.
MODE
: Mode - should be either SPATIAL
or NORMAL
.
Supported values:
The default value is NORMAL
.
BUFFER
: Buffer size, in meters. Only relevant
for SPATIAL
mode. The default value is '0'.
BUFFER_METHOD
: Method used to buffer polygons.
Only relevant for SPATIAL
mode.
Supported values:
The default value is NORMAL
.
MAX_PARTITION_SIZE
: Maximum number of points in
a partition. Only relevant for SPATIAL
mode. The default value is '0'.
MAX_PARTITION_SCORE
: Maximum number of points *
edges in a partition. Only relevant for SPATIAL
mode. The default value is '8000000'.
X_COLUMN_NAME
: Name of column containing x
value of point being filtered in SPATIAL
mode. The default value is 'x'.
Y_COLUMN_NAME
: Name of column containing y
value of point being filtered in SPATIAL
mode. The default value is 'y'.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.

public FilterByValueResponse filterByValue(FilterByValueRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.

public FilterByValueResponse filterByValue(String tableName, String viewName, boolean isString, double value, String valueStr, String columnName, Map<String,String> options) throws GPUdbException
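A hedged sketch of a numeric search with this overload. Names are hypothetical; the create_temp_table key mirrors the CREATE_TEMP_TABLE option documented for this method (the server then generates the view name in the sys_temp schema and returns it under qualified_view_name).

```java
import java.util.HashMap;
import java.util.Map;

public class FilterByValueExample {
    // Lets the server pick a unique temporary view name instead of viewName.
    static Map<String, String> tempViewOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("create_temp_table", "true");
        return options;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical call; requires a live server and the com.gpudb client:
        // FilterByValueResponse resp = db.filterByValue(
        //         "ki_home.orders",  // tableName
        //         "",                // viewName (server-generated via the option)
        //         false,             // isString: numeric search
        //         1000.0,            // value to match
        //         "",                // valueStr is unused for numeric searches
        //         "amount",          // columnName to filter on
        //         tempViewOptions());
        // System.out.println("matches: " + resp.getCount());
        System.out.println(tempViewOptions());
    }
}
```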
tableName
- Name of an existing table on which to perform the
calculation, in [schema_name.]table_name format, using
standard name resolution rules.viewName
- If provided, then this will be the name of the view
containing the results, in [schema_name.]view_name
format, using standard name resolution rules and meeting table naming criteria. Must not be
an already existing table or view. The default value is
''.isString
- Indicates whether the value being searched for is
string or numeric.value
- The value to search for. The default value is 0.valueStr
- The string value to search for. The default value is
''.columnName
- Name of a column on which the filter by value would
be applied.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of viewName
. This is always
allowed even if the caller does not have
permission to create tables. The generated name
is returned in QUALIFIED_VIEW_NAME
.
Supported values:
The default value is FALSE
.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the view as part of
viewName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created view. If the schema
is non-existent, it will be automatically
created.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.

public GetJobResponse getJob(GetJobRequest request) throws GPUdbException
createJob
for starting an
asynchronous job. Some fields of the response are filled only after the
submitted job has finished execution.request
- Request
object containing the
parameters for the operation.Response
object containing the results of
the operation.GPUdbException
- if an error occurs during the operation.

public GetJobResponse getJob(long jobId, Map<String,String> options) throws GPUdbException
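A minimal sketch of polling an asynchronous job with this overload. The job ID and tag are hypothetical; job_tag is the only option documented here, and its lowercase key is assumed to match the JOB_TAG constant.

```java
import java.util.HashMap;
import java.util.Map;

public class GetJobExample {
    // job_tag is the tag returned by the earlier createJob call, if any.
    static Map<String, String> jobOptions(String jobTag) {
        Map<String, String> options = new HashMap<>();
        if (jobTag != null && !jobTag.isEmpty()) {
            options.put("job_tag", jobTag);
        }
        return options;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical call (live server required). Note that some response
        // fields are populated only once the submitted job has finished:
        // GetJobResponse status = db.getJob(12345L, jobOptions("nightly_load"));
        System.out.println(jobOptions("nightly_load"));
    }
}
```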
createJob
for
starting an asynchronous job. Some fields of the response are filled
only after the submitted job has finished execution.jobId
- A unique identifier for the job whose status and result is
to be fetched.options
- Optional parameters.
JOB_TAG
: Job tag returned in call to create the
job
Map
.Response
object containing the results of
the operation.GPUdbException
- if an error occurs during the operation.

public RawGetRecordsResponse getRecordsRaw(GetRecordsRequest request) throws GPUdbException
This operation supports paging through the data via the offset
and limit
parameters. Note
that when paging through a table, if the table (or the underlying table
in case of a view) is updated (records are inserted, deleted or
modified) the records retrieved may differ between calls based on the
updates applied.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsResponse<TResponse> getRecords(Object typeDescriptor, GetRecordsRequest request) throws GPUdbException
This operation supports paging through the data via the offset
and limit
parameters. Note
that when paging through a table, if the table (or the underlying table
in case of a view) is updated (records are inserted, deleted or
modified) the records retrieved may differ between calls based on the
updates applied.
TResponse
- The type of object being retrieved.typeDescriptor
- Type descriptor used for decoding returned
objects.request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.IllegalArgumentException
- if typeDescriptor
is not an
instance of one of the following:
Type
, TypeObjectMap
,
Schema
, or a
Class
that implements IndexedRecord
GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsResponse<TResponse> getRecords(Object typeDescriptor, String tableName, long offset, long limit, Map<String,String> options) throws GPUdbException
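A hedged sketch of paging with this overload. MyRecord, the table name, and the filter expression are hypothetical; the option keys mirror the EXPRESSION/SORT_BY/SORT_ORDER constants documented for this method, and the paging loop relies on the hasMoreRecords flag described below.

```java
import java.util.HashMap;
import java.util.Map;

public class GetRecordsExample {
    static final long END_OF_SET = -9999L; // documented sentinel for "server max"

    // Filter plus sort options; sort_order is only valid alongside sort_by.
    static Map<String, String> pageOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("expression", "amount > 100");
        options.put("sort_by", "timestamp");
        options.put("sort_order", "descending");
        return options;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical paging loop (live server and a record class required):
        // long offset = 0, limit = 10000;
        // GetRecordsResponse<MyRecord> page;
        // do {
        //     page = db.getRecords(MyRecord.class, "ki_home.orders",
        //                          offset, limit, pageOptions());
        //     offset += page.getRecords().size();
        // } while (page.getHasMoreRecords());
        System.out.println(pageOptions());
    }
}
```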
This operation supports paging through the data via the offset
and limit
parameters. Note that when paging through a table, if
the table (or the underlying table in case of a view) is updated
(records are inserted, deleted or modified) the records retrieved may
differ between calls based on the updates applied.
TResponse
- The type of object being retrieved.typeDescriptor
- Type descriptor used for decoding returned
objects.tableName
- Name of the table or view from which the records will
be fetched, in [schema_name.]table_name format, using
standard name resolution rules.offset
- A positive integer indicating the number of initial
results to skip (this can be useful for paging through
the results). The default value is 0. The minimum allowed
value is 0. The maximum allowed value is MAX_INT.limit
- A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to indicate
that the maximum number of results allowed by the server
should be returned. The number of records returned will
never exceed the server's own limit, defined by the max_get_records_size parameter in the
server configuration. Use hasMoreRecords
to see if more records exist in the result
to be fetched, and offset
& limit
to
request subsequent pages of results. The default value is
-9999.options
- EXPRESSION
: Optional filter expression to apply
to the table.
FAST_INDEX_LOOKUP
: Indicates if indexes should
be used to perform the lookup for a given
expression if possible. Only applicable if there
is no sorting, the expression contains only
equivalence comparisons based on existing table
indexes and the range of requested values is
from [0 to END_OF_SET].
Supported values:
The default value is TRUE
.
SORT_BY
: Optional column that the data should
be sorted by. Empty by default (i.e. no sorting
is applied).
SORT_ORDER
: String indicating how the returned
values should be sorted - ascending or
descending. If sort_order is provided, sort_by
has to be provided.
Supported values:
The default value is ASCENDING
.
Map
.Response
object containing the
results of the operation.IllegalArgumentException
- if typeDescriptor
is not an
instance of one of the following:
Type
, TypeObjectMap
,
Schema
, or a
Class
that implements IndexedRecord
GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsResponse<TResponse> getRecords(GetRecordsRequest request) throws GPUdbException
This operation supports paging through the data via the offset
and limit
parameters. Note
that when paging through a table, if the table (or the underlying table
in case of a view) is updated (records are inserted, deleted or
modified) the records retrieved may differ between calls based on the
updates applied.
TResponse
- The type of object being retrieved.request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsResponse<TResponse> getRecords(String tableName, long offset, long limit, Map<String,String> options) throws GPUdbException
This operation supports paging through the data via the offset
and limit
parameters. Note that when paging through a table, if
the table (or the underlying table in case of a view) is updated
(records are inserted, deleted or modified) the records retrieved may
differ between calls based on the updates applied.
TResponse
- The type of object being retrieved.tableName
- Name of the table or view from which the records will
be fetched, in [schema_name.]table_name format, using
standard name resolution rules.offset
- A positive integer indicating the number of initial
results to skip (this can be useful for paging through
the results). The default value is 0. The minimum allowed
value is 0. The maximum allowed value is MAX_INT.limit
- A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to indicate
that the maximum number of results allowed by the server
should be returned. The number of records returned will
never exceed the server's own limit, defined by the max_get_records_size parameter in the
server configuration. Use hasMoreRecords
to see if more records exist in the result
to be fetched, and offset
& limit
to
request subsequent pages of results. The default value is
-9999.options
- EXPRESSION
: Optional filter expression to apply
to the table.
FAST_INDEX_LOOKUP
: Indicates if indexes should
be used to perform the lookup for a given
expression if possible. Only applicable if there
is no sorting, the expression contains only
equivalence comparisons based on existing table
indexes and the range of requested values is
from [0 to END_OF_SET].
Supported values:
The default value is TRUE
.
SORT_BY
: Optional column that the data should
be sorted by. Empty by default (i.e. no sorting
is applied).
SORT_ORDER
: String indicating how the returned
values should be sorted - ascending or
descending. If sort_order is provided, sort_by
has to be provided.
Supported values:
The default value is ASCENDING
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.

public RawGetRecordsByColumnResponse getRecordsByColumnRaw(GetRecordsByColumnRequest request) throws GPUdbException
offset
and
limit
parameters.
Window
functions, which can perform operations like moving averages, are
available through this endpoint as well as createProjection
.
When using pagination, if the table (or the underlying table in the case of a view) is modified (records are inserted, updated, or deleted) during a call to the endpoint, the records or values retrieved may differ between calls based on the type of the update, e.g., the contiguity across pages cannot be relied upon.
If tableName
is empty, selection is performed against a single-row virtual
table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.

public GetRecordsByColumnResponse getRecordsByColumn(GetRecordsByColumnRequest request) throws GPUdbException
offset
and
limit
parameters.
Window
functions, which can perform operations like moving averages, are
available through this endpoint as well as createProjection
.
When using pagination, if the table (or the underlying table in the case of a view) is modified (records are inserted, updated, or deleted) during a call to the endpoint, the records or values retrieved may differ between calls based on the type of the update, e.g., the contiguity across pages cannot be relied upon.
If tableName
is empty, selection is performed against a single-row virtual
table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.

public GetRecordsByColumnResponse getRecordsByColumn(String tableName, List<String> columnNames, long offset, long limit, Map<String,String> options) throws GPUdbException
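A usage sketch for this overload. The table and column names are hypothetical; the order_by key mirrors the ORDER_BY option documented below, which can be used in lieu of sort_by/sort_order and takes per-column sort directions.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GetRecordsByColumnExample {
    // Comma-separated columns with per-column direction, as documented.
    static Map<String, String> columnOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("order_by", "timestamp asc, x desc");
        return options;
    }

    public static void main(String[] args) throws Exception {
        List<String> columns = Arrays.asList("timestamp", "x", "y");
        // Hypothetical call (live server required):
        // GetRecordsByColumnResponse resp = db.getRecordsByColumn(
        //         "ki_home.tracks", columns, 0, 1000, columnOptions());
        // An empty table name queries a one-row virtual table, e.g.:
        // db.getRecordsByColumn("", Arrays.asList("NOW()"), 0, 1, new HashMap<>());
        System.out.println(columns + " " + columnOptions());
    }
}
```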
offset
and limit
parameters.
Window
functions, which can perform operations like moving averages, are
available through this endpoint as well as createProjection
.
When using pagination, if the table (or the underlying table in the case of a view) is modified (records are inserted, updated, or deleted) during a call to the endpoint, the records or values retrieved may differ between calls based on the type of the update, e.g., the contiguity across pages cannot be relied upon.
If tableName
is empty, selection is performed against a
single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
tableName
- Name of the table or view on which this operation will
be performed, in [schema_name.]table_name format,
using standard name resolution rules. An empty
table name retrieves one record from a single-row
virtual table, where columns specified should be
constants or constant expressions.columnNames
- The list of column values to retrieve.offset
- A positive integer indicating the number of initial
results to skip (this can be useful for paging through
the results). The default value is 0. The minimum allowed
value is 0. The maximum allowed value is MAX_INT.limit
- A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to indicate
that the maximum number of results allowed by the server
should be returned. The number of records returned will
never exceed the server's own limit, defined by the max_get_records_size parameter in the
server configuration. Use hasMoreRecords
to see if more records exist in the result
to be fetched, and offset
& limit
to
request subsequent pages of results. The default value is
-9999.options
- EXPRESSION
: Optional filter expression to apply
to the table.
SORT_BY
: Optional column that the data should
be sorted by. Used in conjunction with SORT_ORDER
. The ORDER_BY
option can be used in lieu of SORT_BY
/ SORT_ORDER
. The default value is ''.
SORT_ORDER
: String indicating how the returned
values should be sorted - ASCENDING
or DESCENDING
. If SORT_ORDER
is provided, SORT_BY
has to be provided.
Supported values:
The default value is ASCENDING
.
ORDER_BY
: Comma-separated list of the columns
to be sorted by as well as the sort direction,
e.g., 'timestamp asc, x desc'. The default value
is ''.
CONVERT_WKTS_TO_WKBS
: If TRUE
, then WKT string columns will be returned
as WKB bytes.
Supported values:
The default value is FALSE
.
ROUTE_TO_TOM
: For multihead record retrieval
without shard key expression - specifies from
which tom to retrieve data.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.

public RawGetRecordsBySeriesResponse getRecordsBySeriesRaw(GetRecordsBySeriesRequest request) throws GPUdbException
worldTableName
based on the partial track information contained in the
tableName
.
This operation supports paging through the data via the offset
and
limit
parameters.
In contrast to getRecordsRaw
this returns records grouped by series/track. So if
offset
is 0 and limit
is 5 this operation would return the first 5 series/tracks in
tableName
. Each series/track will be returned sorted by their TIMESTAMP
column.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsBySeriesResponse<TResponse> getRecordsBySeries(Object typeDescriptor, GetRecordsBySeriesRequest request) throws GPUdbException
worldTableName
based on the partial track information contained in the
tableName
.
This operation supports paging through the data via the offset
and
limit
parameters.
In contrast to getRecords
this returns records grouped by series/track. So if offset
is 0
and limit
is 5 this operation would return the first 5 series/tracks in
tableName
. Each series/track will be returned sorted by their TIMESTAMP
column.
TResponse
- The type of object being retrieved.typeDescriptor
- Type descriptor used for decoding returned
objects.request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.IllegalArgumentException
- if typeDescriptor
is not an
instance of one of the following:
Type
, TypeObjectMap
,
Schema
, or a
Class
that implements IndexedRecord
GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsBySeriesResponse<TResponse> getRecordsBySeries(Object typeDescriptor, String tableName, String worldTableName, int offset, int limit, Map<String,String> options) throws GPUdbException
worldTableName
based on the partial track information contained in the
tableName
.
This operation supports paging through the data via the offset
and limit
parameters.
In contrast to getRecords
this returns records grouped by series/track. So if offset
is 0 and limit
is 5 this operation would return the
first 5 series/tracks in tableName
. Each series/track will be
returned sorted by their TIMESTAMP column.
TResponse
- The type of object being retrieved.typeDescriptor
- Type descriptor used for decoding returned
objects.tableName
- Name of the table or view for which series/tracks will
be fetched, in [schema_name.]table_name format, using
standard name resolution rules.worldTableName
- Name of the table containing the complete
series/track information to be returned for the
tracks present in the tableName
, in
[schema_name.]table_name format, using standard
name resolution rules.
Typically this is used when retrieving
series/tracks from a view (which contains partial
series/tracks) but the user wants to retrieve the
entire original series/tracks. Can be blank.offset
- A positive integer indicating the number of initial
series/tracks to skip (useful for paging through the
results). The default value is 0. The minimum allowed
value is 0. The maximum allowed value is MAX_INT.limit
- A positive integer indicating the maximum number of
series/tracks to be returned, or END_OF_SET (-9999) to
indicate that the maximum number of results should be
returned. The default value is 250.options
- Optional parameters. The default value is an empty
Map
.Response
object containing
the results of the operation.IllegalArgumentException
- if typeDescriptor
is not an
instance of one of the following:
Type
, TypeObjectMap
,
Schema
, or a
Class
that implements IndexedRecord
GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsBySeriesResponse<TResponse> getRecordsBySeries(GetRecordsBySeriesRequest request) throws GPUdbException
worldTableName
based on the partial track information contained in the
tableName
.
This operation supports paging through the data via the offset
and
limit
parameters.
In contrast to getRecords
this returns records grouped by series/track. So if offset
is 0
and limit
is 5 this operation would return the first 5 series/tracks in
tableName
. Each series/track will be returned sorted by their TIMESTAMP
column.
TResponse
- The type of object being retrieved.request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsBySeriesResponse<TResponse> getRecordsBySeries(String tableName, String worldTableName, int offset, int limit, Map<String,String> options) throws GPUdbException
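A hedged sketch of this overload. Table names are hypothetical; note that offset and limit count whole series/tracks here, not individual records, which is the key difference from getRecords.

```java
import java.util.Collections;
import java.util.Map;

public class GetRecordsBySeriesExample {
    // Paging is over SERIES (tracks): offset skips tracks, limit caps tracks.
    static boolean isValidPage(int offset, int limit) {
        return offset >= 0 && (limit > 0 || limit == -9999 /* END_OF_SET */);
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> options = Collections.emptyMap();
        // Hypothetical: fetch the first 5 complete tracks whose partial data
        // appears in a filtered view, pulling full tracks from the original table:
        // GetRecordsBySeriesResponse<?> resp = db.getRecordsBySeries(
        //         "ki_home.tracks_view",  // tableName (partial tracks)
        //         "ki_home.tracks",       // worldTableName (complete tracks)
        //         0, 5, options);
        System.out.println(isValidPage(0, 5));
    }
}
```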
worldTableName
based on the partial track information contained in the
tableName
.
This operation supports paging through the data via the offset
and limit
parameters.
In contrast to getRecords
this returns records grouped by series/track. So if offset
is 0 and limit
is 5 this operation would return the
first 5 series/tracks in tableName
. Each series/track will be
returned sorted by their TIMESTAMP column.
TResponse
- The type of object being retrieved.tableName
- Name of the table or view for which series/tracks will
be fetched, in [schema_name.]table_name format, using
standard name resolution rules.worldTableName
- Name of the table containing the complete
series/track information to be returned for the
tracks present in the tableName
, in
[schema_name.]table_name format, using standard
name resolution rules.
Typically this is used when retrieving
series/tracks from a view (which contains partial
series/tracks) but the user wants to retrieve the
entire original series/tracks. Can be blank.offset
- A positive integer indicating the number of initial
series/tracks to skip (useful for paging through the
results). The default value is 0. The minimum allowed
value is 0. The maximum allowed value is MAX_INT.limit
- A positive integer indicating the maximum number of
series/tracks to be returned, or END_OF_SET (-9999) to
indicate that the maximum number of results should be
returned. The default value is 250.options
- Optional parameters. The default value is an empty
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.

public RawGetRecordsFromCollectionResponse getRecordsFromCollectionRaw(GetRecordsFromCollectionRequest request) throws GPUdbException
deleteRecords
.
This operation supports paging through the data via the offset
and limit
parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsFromCollectionResponse<TResponse> getRecordsFromCollection(Object typeDescriptor, GetRecordsFromCollectionRequest request) throws GPUdbException
deleteRecords
.
This operation supports paging through the data via the offset
and limit
parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)
TResponse
- The type of object being retrieved.typeDescriptor
- Type descriptor used for decoding returned
objects.request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.IllegalArgumentException
- if typeDescriptor
is not an
instance of one of the following:
Type
, TypeObjectMap
,
Schema
, or a
Class
that implements IndexedRecord
GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsFromCollectionResponse<TResponse> getRecordsFromCollection(Object typeDescriptor, String tableName, long offset, long limit, Map<String,String> options) throws GPUdbException
deleteRecords
.
This operation supports paging through the data via the offset
and limit
parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)
TResponse
- The type of object being retrieved.typeDescriptor
- Type descriptor used for decoding returned
objects.tableName
- Name of the collection or table from which records are
to be retrieved, in [schema_name.]table_name format,
using standard name resolution rules. Must be an
existing collection or table.offset
- A positive integer indicating the number of initial
results to skip (this can be useful for paging through
the results). The default value is 0. The minimum allowed
value is 0. The maximum allowed value is MAX_INT.limit
- A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to indicate
that the maximum number of results allowed by the server
should be returned. The number of records returned will
never exceed the server's own limit, defined by the max_get_records_size parameter in the
server configuration. Use offset
& limit
to request subsequent pages of results. The default value
is -9999.options
- RETURN_RECORD_IDS
: If TRUE
then return the internal record ID along
with each returned record.
Supported values:
The default value is FALSE
.
EXPRESSION
: Optional filter expression to apply
to the table. The default value is ''.
Map
.Response
object
containing the results of the operation.IllegalArgumentException
- if typeDescriptor
is not an
instance of one of the following:
Type
, TypeObjectMap
,
Schema
, or a
Class
that implements IndexedRecord
GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsFromCollectionResponse<TResponse> getRecordsFromCollection(GetRecordsFromCollectionRequest request) throws GPUdbException
deleteRecords
.
This operation supports paging through the data via the offset
and limit
parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)
TResponse
- The type of object being retrieved.request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.

public <TResponse> GetRecordsFromCollectionResponse<TResponse> getRecordsFromCollection(String tableName, long offset, long limit, Map<String,String> options) throws GPUdbException
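A brief sketch of this overload, kept for completeness since the endpoint is deprecated (getRecords is the usual replacement). The table name is hypothetical; the lowercase keys mirror the RETURN_RECORD_IDS and EXPRESSION options documented below.

```java
import java.util.HashMap;
import java.util.Map;

public class GetRecordsFromCollectionExample {
    static Map<String, String> collectionOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("return_record_ids", "true"); // include internal record IDs
        options.put("expression", "x > 0");       // optional filter expression
        return options;
    }

    public static void main(String[] args) throws Exception {
        // Deprecated; prefer getRecords in new code. Hypothetical call:
        // GetRecordsFromCollectionResponse<?> resp = db.getRecordsFromCollection(
        //         "ki_home.my_table", 0, 100, collectionOptions());
        System.out.println(collectionOptions());
    }
}
```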
deleteRecords
.
This operation supports paging through the data via the offset
and limit
parameters.
Note that when using the Java API, it is not possible to retrieve records from join views using this operation. (DEPRECATED)
TResponse
- The type of object being retrieved.tableName
- Name of the collection or table from which records are
to be retrieved, in [schema_name.]table_name format,
using standard name resolution rules. Must be an
existing collection or table.offset
- A positive integer indicating the number of initial
results to skip (this can be useful for paging through
the results). The default value is 0. The minimum allowed
value is 0. The maximum allowed value is MAX_INT.limit
- A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to indicate
that the maximum number of results allowed by the server
should be returned. The number of records returned will
never exceed the server's own limit, defined by the max_get_records_size parameter in the
server configuration. Use offset
& limit
to request subsequent pages of results. The default value
is -9999.options
- RETURN_RECORD_IDS
: If TRUE
then return the internal record ID along
with each returned record.
Supported values:
The default value is FALSE
.
EXPRESSION
: Optional filter expression to apply
to the table. The default value is ''.
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.

public GetVectortileResponse getVectortile(GetVectortileRequest request) throws GPUdbException
GPUdbException
public GetVectortileResponse getVectortile(List<String> tableNames, List<String> columnNames, Map<String,List<String>> layers, int tileX, int tileY, int zoom, Map<String,String> options) throws GPUdbException
GPUdbException
public GrantPermissionResponse grantPermission(GrantPermissionRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.

public GrantPermissionResponse grantPermission(String principal, String object, String objectType, String permission, Map<String,String> options) throws GPUdbException
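A hedged sketch of this overload granting table-level read access with column and row restrictions. The role, table, column, and filter values are hypothetical; the lowercase option keys mirror the COLUMNS and FILTER_EXPRESSION constants documented below.

```java
import java.util.HashMap;
import java.util.Map;

public class GrantPermissionExample {
    // Column-level and row-level security restrictions for the grant.
    static Map<String, String> grantOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("columns", "name,region");              // visible columns only
        options.put("filter_expression", "region = 'US'");  // visible rows only
        return options;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical call (live server required):
        // db.grantPermission("analyst_role",     // principal (user or role)
        //                    "ki_home.orders",   // object, fully qualified
        //                    "table",            // objectType
        //                    "read",             // permission
        //                    grantOptions());
        System.out.println(grantOptions());
    }
}
```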
principal
- Name of the user or role for which the permission is
being granted. Must be an existing user or role. The
default value is ''.object
- Name of the object on which the permission is being granted. It is
recommended to use a fully-qualified name when possible.objectType
- The type of object on which the permission is being granted.
Supported values:
CONTEXT
: Context
CREDENTIAL
: Credential
DATASINK
: Data Sink
DATASOURCE
: Data Source
DIRECTORY
: KIFS File Directory
GRAPH
: A Graph object
PROC
: UDF Procedure
SCHEMA
: Schema
SQL_PROC
: SQL Procedure
SYSTEM
: System-level access
TABLE
: Database Table
TABLE_MONITOR
: Table monitor
permission
- Permission being granted.
Supported values:
ADMIN
: Full read/write and administrative
access on the object.
CONNECT
: Connect access on the given data
source or data sink.
CREATE
: Ability to create new objects of
this type.
DELETE
: Delete rows from tables.
EXECUTE
: Ability to execute the Procedure
object.
INSERT
: Insert access to tables.
READ
: Ability to read, list and use the
object.
UPDATE
: Update access to the table.
USER_ADMIN
: Access to administer users and
roles that do not have system_admin
permission.
WRITE
: Access to write, change and delete
objects.
options
- Optional parameters.
COLUMNS
: Apply table security to these columns,
comma-separated. The default value is ''.
FILTER_EXPRESSION
: Optional filter expression
to apply to this grant. Only rows that match
the filter will be affected. The default value
is ''.
WITH_GRANT_OPTION
: Allow the recipient to grant
the same permission (or subset) to others.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.

public GrantPermissionCredentialResponse grantPermissionCredential(GrantPermissionCredentialRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.

public GrantPermissionCredentialResponse grantPermissionCredential(String name, String permission, String credentialName, Map<String,String> options) throws GPUdbException
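A brief sketch of this overload; the grantPermissionDatasource and grantPermissionDirectory methods below follow the same shape. User and credential names are hypothetical; as documented, an empty credential name grants the permission on all credentials.

```java
import java.util.Collections;

public class GrantPermissionCredentialExample {
    // Normalizes the target: empty string means "all credentials".
    static String targetCredential(String name) {
        return name == null ? "" : name;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical call (live server required):
        // db.grantPermissionCredential(
        //         "etl_user",                    // existing user or role
        //         "credential_read",             // permission to grant
        //         targetCredential("s3_cred"),   // existing credential, or ""
        //         Collections.emptyMap());
        System.out.println(targetCredential(null).isEmpty());
    }
}
```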
name
- Name of the user or role to which the permission will be
granted. Must be an existing user or role.permission
- Permission to grant to the user or role.
Supported values:
CREDENTIAL_ADMIN
: Full read/write and
administrative access on the credential.
CREDENTIAL_READ
: Ability to read and use the
credential.
credentialName
- Name of the credential on which the permission
will be granted. Must be an existing credential,
or an empty string to grant access on all
credentials.options
- Optional parameters. The default value is an empty
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public GrantPermissionDatasourceResponse grantPermissionDatasource(GrantPermissionDatasourceRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public GrantPermissionDatasourceResponse grantPermissionDatasource(String name, String permission, String datasourceName, Map<String,String> options) throws GPUdbException
name
- Name of the user or role to which the permission will be
granted. Must be an existing user or role.permission
- Permission to grant to the user or role.
Supported values:
datasourceName
- Name of the data source on which the permission
will be granted. Must be an existing data source,
or an empty string to grant permission on all
data sources.options
- Optional parameters. The default value is an empty
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public GrantPermissionDirectoryResponse grantPermissionDirectory(GrantPermissionDirectoryRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public GrantPermissionDirectoryResponse grantPermissionDirectory(String name, String permission, String directoryName, Map<String,String> options) throws GPUdbException
name
- Name of the user or role to which the permission will be
granted. Must be an existing user or role.permission
- Permission to grant to the user or role.
Supported values:
DIRECTORY_READ
: For files in the directory,
access to list files, download files, or use
files in server side functions
DIRECTORY_WRITE
: Access to upload files to,
or delete files from, the directory. A user
or role with write access automatically has
read access
directoryName
- Name of the KiFS directory to which the permission
grants access. An empty directory name grants
access to all KiFS directoriesoptions
- Optional parameters. The default value is an empty
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public GrantPermissionProcResponse grantPermissionProc(GrantPermissionProcRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public GrantPermissionProcResponse grantPermissionProc(String name, String permission, String procName, Map<String,String> options) throws GPUdbException
name
- Name of the user or role to which the permission will be
granted. Must be an existing user or role.permission
- Permission to grant to the user or role.
Supported values:
PROC_ADMIN
: Admin access to the proc.
PROC_EXECUTE
: Execute access to the proc.
procName
- Name of the proc to which the permission grants access.
Must be an existing proc, or an empty string to grant
access to all procs.options
- Optional parameters. The default value is an empty
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public GrantPermissionSystemResponse grantPermissionSystem(GrantPermissionSystemRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public GrantPermissionSystemResponse grantPermissionSystem(String name, String permission, Map<String,String> options) throws GPUdbException
name
- Name of the user or role to which the permission will be
granted. Must be an existing user or role.permission
- Permission to grant to the user or role.
Supported values:
SYSTEM_ADMIN
: Full access to all data and
system functions.
SYSTEM_USER_ADMIN
: Access to administer
users and roles that do not have system_admin
permission.
SYSTEM_WRITE
: Read and write access to all
tables.
SYSTEM_READ
: Read-only access to all tables.
options
- Optional parameters. The default value is an empty
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public GrantPermissionTableResponse grantPermissionTable(GrantPermissionTableRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public GrantPermissionTableResponse grantPermissionTable(String name, String permission, String tableName, String filterExpression, Map<String,String> options) throws GPUdbException
name
- Name of the user or role to which the permission will be
granted. Must be an existing user or role.permission
- Permission to grant to the user or role.
Supported values:
TABLE_ADMIN
: Full read/write and
administrative access to the table.
TABLE_INSERT
: Insert access to the table.
TABLE_UPDATE
: Update access to the table.
TABLE_DELETE
: Delete access to the table.
TABLE_READ
: Read access to the table.
tableName
- Name of the table to which the permission grants
access, in [schema_name.]table_name format, using
standard name resolution rules. Must be an
existing table, view, or schema. If a schema, the
permission also applies to tables and views in the
schema.filterExpression
- Optional filter expression to apply to this
grant. Only rows that match the filter will be
affected. The default value is ''.options
- Optional parameters.
COLUMNS
: Apply security to these columns,
comma-separated. The default value is ''.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public GrantRoleResponse grantRole(GrantRoleRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public GrantRoleResponse grantRole(String role, String member, Map<String,String> options) throws GPUdbException
role
- Name of the role in which membership will be granted. Must
be an existing role.member
- Name of the user or role that will be granted membership
in role
. Must be an existing user or role.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public HasPermissionResponse hasPermission(HasPermissionRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public HasPermissionResponse hasPermission(String principal, String object, String objectType, String permission, Map<String,String> options) throws GPUdbException
principal
- Name of the user for which the permission is being
checked. Must be an existing user. If blank, will use
the current user. The default value is ''.object
- Name of object to check for the requested permission. It
is recommended to use a fully-qualified name when
possible.objectType
- The type of object being checked.
Supported values:
CONTEXT
: Context
CREDENTIAL
: Credential
DATASINK
: Data Sink
DATASOURCE
: Data Source
DIRECTORY
: KiFS File Directory
GRAPH
: A Graph object
PROC
: UDF Procedure
SCHEMA
: Schema
SQL_PROC
: SQL Procedure
SYSTEM
: System-level access
TABLE
: Database Table
TABLE_MONITOR
: Table monitor
permission
- Permission to check for.
Supported values:
ADMIN
: Full read/write and administrative
access on the object.
CONNECT
: Connect access on the given data
source or data sink.
CREATE
: Ability to create new objects of
this type.
DELETE
: Delete rows from tables.
EXECUTE
: Ability to execute the procedure
object.
INSERT
: Insert access to tables.
READ
: Ability to read, list and use the
object.
UPDATE
: Update access to the table.
USER_ADMIN
: Access to administer users and
roles that do not have system_admin
permission.
WRITE
: Access to write, change and delete
objects.
options
- Optional parameters.
NO_ERROR_IF_NOT_EXISTS
: If FALSE
will return an error if the provided
object
does not exist or is blank. If
TRUE
then it will return FALSE
for hasPermission
.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public HasProcResponse hasProc(HasProcRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public HasProcResponse hasProc(String procName, Map<String,String> options) throws GPUdbException
procName
- Name of the proc to check for existence.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public HasRoleResponse hasRole(HasRoleRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public HasRoleResponse hasRole(String principal, String role, Map<String,String> options) throws GPUdbException
principal
- Name of the user for which role membership is being
checked. Must be an existing user. If blank, will use
the current user. The default value is ''.role
- Name of role to check for membership.options
- Optional parameters.
NO_ERROR_IF_NOT_EXISTS
: If FALSE
will return an error if the provided
role
does not exist or is blank. If
TRUE
then it will return FALSE
for hasRole
.
Supported values:
The default value is FALSE
.
ONLY_DIRECT
: If FALSE
will search recursively to determine whether the
principal is a member of role
. If
TRUE
then principal
must directly be a
member of role
.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public HasSchemaResponse hasSchema(HasSchemaRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public HasSchemaResponse hasSchema(String schemaName, Map<String,String> options) throws GPUdbException
schemaName
- Name of the schema to check for existence, in root,
using standard name resolution rules.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public HasTableResponse hasTable(HasTableRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public HasTableResponse hasTable(String tableName, Map<String,String> options) throws GPUdbException
tableName
- Name of the table to check for existence, in
[schema_name.]table_name format, using standard name resolution rules.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public HasTypeResponse hasType(HasTypeRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public HasTypeResponse hasType(String typeId, Map<String,String> options) throws GPUdbException
typeId
- Id of the type returned in response to createType
request.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ImportModelResponse importModel(ImportModelRequest request) throws GPUdbException
GPUdbException
public ImportModelResponse importModel(String modelName, String registryName, String container, String runFunction, String modelType, Map<String,String> options) throws GPUdbException
GPUdbException
public InsertRecordsResponse insertRecordsRaw(RawInsertRecordsRequest request) throws GPUdbException
The options
parameter can be used to customize this function's behavior.
The UPDATE_ON_EXISTING_PK
option specifies the record collision policy for
inserting into a table with a primary key, but is ignored if no primary key exists.
The RETURN_RECORD_IDS
option indicates that the database should return the
unique identifiers of inserted records.
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public <TRequest> InsertRecordsResponse insertRecords(InsertRecordsRequest<TRequest> request) throws GPUdbException
The options
parameter can be used to customize this function's behavior.
The UPDATE_ON_EXISTING_PK
option specifies the record collision policy for
inserting into a table with a primary key, but is ignored if no primary key exists.
The RETURN_RECORD_IDS
option indicates that the database should return the
unique identifiers of inserted records.
TRequest
- The type of object being added.request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public <TRequest> InsertRecordsResponse insertRecords(TypeObjectMap<TRequest> typeObjectMap, InsertRecordsRequest<TRequest> request) throws GPUdbException
The options
parameter can be used to customize this function's behavior.
The UPDATE_ON_EXISTING_PK
option specifies the record collision policy for
inserting into a table with a primary key, but is ignored if no primary key exists.
The RETURN_RECORD_IDS
option indicates that the database should return the
unique identifiers of inserted records.
TRequest
- The type of object being added.typeObjectMap
- Type object map used for encoding input objects.request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.IllegalArgumentException
- if typeObjectMap
is not an
instance of one of the following:
Type
, TypeObjectMap
,
Schema
, or a
Class
that implements IndexedRecord
GPUdbException
- if an error occurs during the operation.public <TRequest> InsertRecordsResponse insertRecords(String tableName, List<TRequest> data, Map<String,String> options) throws GPUdbException
The options
parameter can be used to customize this function's
behavior.
The UPDATE_ON_EXISTING_PK
option specifies the record collision policy for
inserting into a table with a primary key, but is ignored if no primary key exists.
The RETURN_RECORD_IDS
option indicates that the database should return the
unique identifiers of inserted records.
TRequest
- The type of object being added.tableName
- Name of table to which the records are to be added, in
[schema_name.]table_name format, using standard name resolution rules. Must be an
existing table.data
- An array of binary-encoded data for the records to be
added. All records must be of the same type as that of the
table. Empty array if listEncoding
is JSON
.options
- Optional parameters.
UPDATE_ON_EXISTING_PK
: Specifies the record
collision policy for inserting into a table with
a primary key. If set to TRUE
, any existing table record with primary
key values that match those of a record being
inserted will be replaced by that new record
(the new data will be "upserted"). If set to
FALSE
, any existing table record with primary
key values that match those of a record being
inserted will remain unchanged, while the new
record will be rejected and the error handled as
determined by IGNORE_EXISTING_PK
, ALLOW_PARTIAL_BATCH
, & RETURN_INDIVIDUAL_ERRORS
. If the specified
table does not have a primary key, then this
option has no effect.
Supported values:
TRUE
: Upsert new records when primary
keys match existing records
FALSE
: Reject new records when primary
keys match existing records
The default value is FALSE.
IGNORE_EXISTING_PK
: Specifies the record
collision error-suppression policy for inserting
into a table with a primary key, only used when
not in upsert mode (upsert mode is disabled when
UPDATE_ON_EXISTING_PK
is FALSE
). If set to TRUE
, any record being inserted that is
rejected for having primary key values that
match those of an existing table record will be
ignored with no error generated. If FALSE
, the rejection of any record for having
primary key values matching an existing record
will result in an error being reported, as
determined by ALLOW_PARTIAL_BATCH
& RETURN_INDIVIDUAL_ERRORS
. If the specified
table does not have a primary key or if upsert
mode is in effect (UPDATE_ON_EXISTING_PK
is TRUE
), then this option has no effect.
Supported values:
TRUE
: Ignore new records whose primary
key values collide with those of
existing records
FALSE
: Treat as errors any new records
whose primary key values collide with
those of existing records
The default value is FALSE.
RETURN_RECORD_IDS
: If TRUE
then return the internal record id along
with each inserted record.
Supported values:
The default value is FALSE
.
TRUNCATE_STRINGS
: If set to TRUE
, any strings which are too long for their
target charN string columns will be truncated to
fit.
Supported values:
The default value is FALSE
.
RETURN_INDIVIDUAL_ERRORS
: If set to TRUE
, success will always be returned, and any
errors found will be included in the info map.
The "bad_record_indices" entry is a
comma-separated list of bad record indices
(0-based), and there will also be an "error_N"
entry for each record with an error, where N is
the record's index (0-based).
Supported values:
The default value is FALSE
.
ALLOW_PARTIAL_BATCH
: If set to TRUE
, all correct records will be inserted and
incorrect records will be rejected and reported.
Otherwise, the entire batch will be rejected if
any records are incorrect.
Supported values:
The default value is FALSE
.
DRY_RUN
: If set to TRUE
, no data will be saved and any errors will
be returned.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public <TRequest> InsertRecordsResponse insertRecords(TypeObjectMap<TRequest> typeObjectMap, String tableName, List<TRequest> data, Map<String,String> options) throws GPUdbException
The options
parameter can be used to customize this function's
behavior.
The UPDATE_ON_EXISTING_PK
option specifies the record collision policy for
inserting into a table with a primary key, but is ignored if no primary key exists.
The RETURN_RECORD_IDS
option indicates that the database should return the
unique identifiers of inserted records.
TRequest
- The type of object being added.typeObjectMap
- Type object map used for encoding input objects.tableName
- Name of table to which the records are to be added, in
[schema_name.]table_name format, using standard name resolution rules. Must be an
existing table.data
- An array of binary-encoded data for the records to be
added. All records must be of the same type as that of the
table. Empty array if listEncoding
is JSON
.options
- Optional parameters.
UPDATE_ON_EXISTING_PK
: Specifies the record
collision policy for inserting into a table with
a primary key. If set to TRUE
, any existing table record with primary
key values that match those of a record being
inserted will be replaced by that new record
(the new data will be "upserted"). If set to
FALSE
, any existing table record with primary
key values that match those of a record being
inserted will remain unchanged, while the new
record will be rejected and the error handled as
determined by IGNORE_EXISTING_PK
, ALLOW_PARTIAL_BATCH
, & RETURN_INDIVIDUAL_ERRORS
. If the specified
table does not have a primary key, then this
option has no effect.
Supported values:
TRUE
: Upsert new records when primary
keys match existing records
FALSE
: Reject new records when primary
keys match existing records
The default value is FALSE.
IGNORE_EXISTING_PK
: Specifies the record
collision error-suppression policy for inserting
into a table with a primary key, only used when
not in upsert mode (upsert mode is disabled when
UPDATE_ON_EXISTING_PK
is FALSE
). If set to TRUE
, any record being inserted that is
rejected for having primary key values that
match those of an existing table record will be
ignored with no error generated. If FALSE
, the rejection of any record for having
primary key values matching an existing record
will result in an error being reported, as
determined by ALLOW_PARTIAL_BATCH
& RETURN_INDIVIDUAL_ERRORS
. If the specified
table does not have a primary key or if upsert
mode is in effect (UPDATE_ON_EXISTING_PK
is TRUE
), then this option has no effect.
Supported values:
TRUE
: Ignore new records whose primary
key values collide with those of
existing records
FALSE
: Treat as errors any new records
whose primary key values collide with
those of existing records
The default value is FALSE.
RETURN_RECORD_IDS
: If TRUE
then return the internal record id along
with each inserted record.
Supported values:
The default value is FALSE
.
TRUNCATE_STRINGS
: If set to TRUE
, any strings which are too long for their
target charN string columns will be truncated to
fit.
Supported values:
The default value is FALSE
.
RETURN_INDIVIDUAL_ERRORS
: If set to TRUE
, success will always be returned, and any
errors found will be included in the info map.
The "bad_record_indices" entry is a
comma-separated list of bad record indices
(0-based), and there will also be an "error_N"
entry for each record with an error, where N is
the record's index (0-based).
Supported values:
The default value is FALSE
.
ALLOW_PARTIAL_BATCH
: If set to TRUE
, all correct records will be inserted and
incorrect records will be rejected and reported.
Otherwise, the entire batch will be rejected if
any records are incorrect.
Supported values:
The default value is FALSE
.
DRY_RUN
: If set to TRUE
, no data will be saved and any errors will
be returned.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.IllegalArgumentException
- if typeObjectMap
is not an
instance of one of the following:
Type
, TypeObjectMap
,
Schema
, or a
Class
that implements IndexedRecord
GPUdbException
- if an error occurs during the operation.public InsertRecordsFromFilesResponse insertRecordsFromFiles(InsertRecordsFromFilesRequest request) throws GPUdbException
For delimited text files, there are two loading schemes: positional and
name-based. The name-based loading scheme is enabled when the file has a
header present and TEXT_HAS_HEADER
is set to TRUE
. In
this scheme, the source file(s) field names must match the target
table's column names exactly; however, the source file can have more
fields than the target table has columns. If ERROR_HANDLING
is set to PERMISSIVE
, the source file can have fewer fields than the target table
has columns. If the name-based loading scheme is being used, names
matching the file header's names may be provided to COLUMNS_TO_LOAD
instead of numbers, but ranges are not supported.
Note: Because data is loaded in parallel, insertion order is not guaranteed. For tables with primary keys, this means that on a primary key collision it is indeterminate which record will be inserted first and remain, while the rest of the colliding records are discarded.
Returns once all files are processed.
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public InsertRecordsFromFilesResponse insertRecordsFromFiles(String tableName, List<String> filepaths, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options) throws GPUdbException
For delimited text files, there are two loading schemes: positional and
name-based. The name-based loading scheme is enabled when the file has a
header present and TEXT_HAS_HEADER
is set to TRUE
. In
this scheme, the source file(s) field names must match the target
table's column names exactly; however, the source file can have more
fields than the target table has columns. If ERROR_HANDLING
is set to PERMISSIVE
, the source file can have fewer fields than the target table
has columns. If the name-based loading scheme is being used, names
matching the file header's names may be provided to COLUMNS_TO_LOAD
instead of numbers, but ranges are not supported.
Note: Because data is loaded in parallel, insertion order is not guaranteed. For tables with primary keys, this means that on a primary key collision it is indeterminate which record will be inserted first and remain, while the rest of the colliding records are discarded.
Returns once all files are processed.
tableName
- Name of the table into which the data will be
inserted, in [schema_name.]table_name format, using
standard name resolution rules. If the table
does not exist, the table will be created using either
an existing TYPE_ID
or the type inferred from the file, and the
new table name will have to meet standard table naming criteria.filepaths
- A list of file paths from which data will be sourced;
For paths in KiFS, use the URI prefix kifs://
followed by the path to a file or directory. File
matching by prefix is supported, e.g. kifs://dir/file
would match dir/file_1 and dir/file_2. When prefix
matching is used, the path must start with a full,
valid KiFS directory name. If an external data source
is specified in DATASOURCE_NAME
, these file paths must resolve to
accessible files at that data source location. Prefix
matching is supported. If the data source is hdfs,
prefixes must be aligned with directories, i.e.
partial file names will not match. If no data source
is specified, the files are assumed to be local to the
database and must all be accessible to the gpudb user,
residing on the path (or relative to the path)
specified by the external files directory in the
Kinetica configuration file. Wildcards (*)
can be used to specify a group of files. Prefix
matching is supported, but the prefixes must be aligned
with directories. If the first path ends in .tsv, the
text delimiter will be defaulted to a tab character.
If the first path ends in .psv, the text delimiter
will be defaulted to a pipe character (|).modifyColumns
- Not implemented yet. The default value is an empty
Map
.createTableOptions
- Options from createTable
, allowing the
structure of the table to be defined
independently of the data source, when
creating the target table.
TYPE_ID
: ID of a currently
registered type.
NO_ERROR_IF_EXISTS
: If TRUE
, prevents an error from
occurring if the table already exists
and is of the given type. If a table
with the same name but a different
type exists, it is still an error.
Supported values:
The default value is FALSE
.
IS_REPLICATED
: Affects the distribution scheme
for the table's data. If TRUE
and the given table has no
explicit shard key defined,
the table will be replicated. If
FALSE
, the table will be sharded according
to the shard key specified in the
given TYPE_ID
, or randomly sharded,
if no shard key is specified. Note
that a type containing a shard key
cannot be used to create a replicated
table.
Supported values:
The default value is FALSE
.
FOREIGN_KEYS
: Semicolon-separated
list of foreign keys, of
the format '(source_column_name [,
...]) references
target_table_name(primary_key_column_name
[, ...]) [as foreign_key_name]'.
FOREIGN_SHARD_KEY
: Foreign shard key
of the format 'source_column
references shard_by_column from
target_table(primary_key_column)'.
PARTITION_TYPE
: Partitioning scheme
to use.
Supported values:
RANGE
: Use range
partitioning.
INTERVAL
: Use interval
partitioning.
LIST
: Use list
partitioning.
HASH
: Use hash
partitioning.
SERIES
: Use series
partitioning.
PARTITION_KEYS
: Comma-separated list
of partition keys, which are the
columns or column expressions by
which records will be assigned to
partitions defined by PARTITION_DEFINITIONS
.
PARTITION_DEFINITIONS
:
Comma-separated list of partition
definitions, whose format depends on
the choice of PARTITION_TYPE
. See range partitioning,
interval
partitioning, list partitioning,
hash partitioning,
or series partitioning
for example formats.
IS_AUTOMATIC_PARTITION
: If TRUE
, a new partition will be
created for values which don't fall
into an existing partition.
Currently, only supported for list partitions.
Supported values:
The default value is FALSE
.
TTL
: Sets the TTL of the table
specified in tableName
.
CHUNK_SIZE
: Indicates the number of
records per chunk to be used for this
table.
CHUNK_COLUMN_MAX_MEMORY
: Indicates
the target maximum data size for each
column in a chunk to be used for this
table.
CHUNK_MAX_MEMORY
: Indicates the
target maximum data size for all
columns in a chunk to be used for
this table.
IS_RESULT_TABLE
: Indicates whether
the table is a memory-only table.
A result table cannot contain columns
with text_search data-handling, and
it will not be retained if the server
is restarted.
Supported values:
The default value is FALSE
.
STRATEGY_DEFINITION
: The tier strategy for
the table and its columns.
Map
.options
- Optional parameters.
BAD_RECORD_TABLE_NAME
: Name of a table to which
records that were rejected are written. The
bad-record-table has the following columns:
line_number (long), line_rejected (string),
error_message (string). When ERROR_HANDLING
is ABORT
, bad records table is not populated.
BAD_RECORD_TABLE_LIMIT
: A positive integer
indicating the maximum number of records that
can be written to the bad-record-table. The
default value is '10000'.
BAD_RECORD_TABLE_LIMIT_PER_INPUT
: For
subscriptions, a positive integer indicating the
maximum number of records that can be written to
the bad-record-table per file/payload. The default
value is BAD_RECORD_TABLE_LIMIT
, and the total size of the
table per rank is limited to BAD_RECORD_TABLE_LIMIT
.
BATCH_SIZE
: Number of records to insert per
batch when inserting data. The default value is
'50000'.
COLUMN_FORMATS
: For each target column
specified, applies the column-property-bound
format to the source data loaded into that
column. Each column format will contain a
mapping of one or more of its column properties
to an appropriate format for each property.
Currently supported column properties include
date, time, & datetime. The parameter value must
be formatted as a JSON string of maps of column
names to maps of column properties to their
corresponding column formats, e.g., '{
"order_date" : { "date" : "%Y.%m.%d" },
"order_time" : { "time" : "%H:%M:%S" } }'. See
DEFAULT_COLUMN_FORMATS
for valid format syntax.
COLUMNS_TO_LOAD
: Specifies a comma-delimited
list of columns from the source data to load.
If more than one file is being loaded, this list
applies to all files. Column numbers can be
specified discretely or as a range. For
example, a value of '5,7,1..3' will insert
values from the fifth column in the source data
into the first column in the target table, from
the seventh column in the source data into the
second column in the target table, and from the
first through third columns in the source data
into the third through fifth columns in the
target table. If the source data contains a
header, column names matching the file header
names may be provided instead of column numbers.
If the target table doesn't exist, the table
will be created with the columns in this order.
If the target table does exist with columns in a
different order than the source data, this list
can be used to match the order of the target
table. For example, a value of 'C, B, A' will
create a three column table with column C,
followed by column B, followed by column A; or
will insert those fields in that order into a
table created with columns in that order. If
the target table exists, the column names must
match the source data field names for a
name-mapping to be successful. Mutually
exclusive with COLUMNS_TO_SKIP
.
COLUMNS_TO_SKIP
: Specifies a comma-delimited
list of columns from the source data to skip.
Mutually exclusive with COLUMNS_TO_LOAD
.
COMPRESSION_TYPE
: Source data compression type.
Supported values:
NONE
: No compression.
AUTO
: Auto detect compression type
GZIP
: gzip file compression.
BZIP2
: bzip2 file compression.
The default value is AUTO.
DATASOURCE_NAME
: Name of an existing external
data source from which data file(s) specified in
filepaths
will be loaded
DEFAULT_COLUMN_FORMATS
: Specifies the default
format to be applied to source data loaded into
columns with the corresponding column property.
Currently supported column properties include
date, time, & datetime. This default
column-property-bound format can be overridden
by specifying a column property & format for a
given target column in COLUMN_FORMATS
. For each specified annotation,
the format will apply to all columns with that
annotation unless a custom COLUMN_FORMATS
for that annotation is
specified. The parameter value must be
formatted as a JSON string that is a map of
column properties to their respective column
formats, e.g., '{ "date" : "%Y.%m.%d", "time" :
"%H:%M:%S" }'. Column formats are specified as
a string of control characters and plain text.
The supported control characters are 'Y', 'm',
'd', 'H', 'M', and 'S', which follow the
Linux 'strptime()' specification, as well as
's', which specifies seconds and fractional
seconds (though the fractional component will be
truncated past milliseconds). Formats for the
'date' annotation must include the 'Y', 'm', and
'd' control characters. Formats for the 'time'
annotation must include the 'H', 'M', and either
'S' or 's' (but not both) control characters.
Formats for the 'datetime' annotation must meet
both the 'date' and 'time' control character
requirements. For example, '{"datetime" :
"%m/%d/%Y %H:%M:%S" }' would be used to
interpret text as "05/04/2000 12:12:11".
ERROR_HANDLING
: Specifies how errors should be
handled upon insertion.
Supported values:
PERMISSIVE
: Records with missing
columns are populated with nulls if
possible; otherwise, the malformed
records are skipped.
IGNORE_BAD_RECORDS
: Malformed records
are skipped.
ABORT
: Stops current insertion and
aborts entire operation when an error is
encountered. Primary key collisions are
considered abortable errors in this
mode.
The default value is ABORT.
FILE_TYPE
: Specifies the type of the file(s)
whose records will be inserted.
Supported values:
AVRO
: Avro file format
DELIMITED_TEXT
: Delimited text file
format; e.g., CSV, TSV, PSV, etc.
GDB
: Esri/GDB file format
JSON
: JSON file format
PARQUET
: Apache Parquet file format
SHAPEFILE
: ShapeFile file format
The default value is DELIMITED_TEXT.
FLATTEN_COLUMNS
: Specifies how to handle nested
columns.
Supported values:
TRUE
: Break up nested columns to
multiple columns
FALSE
: Treat nested columns as json
columns instead of flattening
The default value is FALSE.
GDAL_CONFIGURATION_OPTIONS
: Comma-separated
list of GDAL configuration options for the
specific request, as key=value pairs.
IGNORE_EXISTING_PK
: Specifies the record
collision error-suppression policy for inserting
into a table with a primary key, only used when
not in upsert mode (upsert mode is disabled when
UPDATE_ON_EXISTING_PK
is FALSE
). If set to TRUE
, any record being inserted that is
rejected for having primary key values that
match those of an existing table record will be
ignored with no error generated. If FALSE
, the rejection of any record for having
primary key values matching an existing record
will result in an error being reported, as
determined by ERROR_HANDLING
. If the specified table does
not have a primary key or if upsert mode is in
effect (UPDATE_ON_EXISTING_PK
is TRUE
), then this option has no effect.
Supported values:
TRUE
: Ignore new records whose primary
key values collide with those of
existing records
FALSE
: Treat as errors any new records
whose primary key values collide with
those of existing records
The default value is FALSE.
INGESTION_MODE
: Whether to do a full load, dry
run, or perform a type inference on the source
data.
Supported values:
FULL
: Run a type inference on the
source data (if needed) and ingest
DRY_RUN
: Does not load data, but walks
through the source data and determines
the number of valid records, taking into
account the current mode of ERROR_HANDLING
.
TYPE_INFERENCE_ONLY
: Infer the type of
the source data and return, without
ingesting any data. The inferred type
is returned in the response.
The default value is FULL.
KAFKA_CONSUMERS_PER_RANK
: Number of Kafka
consumer threads per rank (valid range 1-6). The
default value is '1'.
KAFKA_GROUP_ID
: The group id to be used when
consuming data from a Kafka topic (valid only
for Kafka datasource subscriptions).
KAFKA_OFFSET_RESET_POLICY
: Policy to determine
whether the Kafka data consumption starts either
at earliest offset or latest offset.
Supported values: EARLIEST, LATEST.
The default value is EARLIEST.
KAFKA_OPTIMISTIC_INGEST
: Enable optimistic
ingestion where Kafka topic offsets and table
data are committed independently to achieve
parallelism.
Supported values: TRUE, FALSE.
The default value is FALSE.
KAFKA_SUBSCRIPTION_CANCEL_AFTER
: Sets the Kafka
subscription lifespan (in minutes). Expired
subscription will be cancelled automatically.
KAFKA_TYPE_INFERENCE_FETCH_TIMEOUT
: Maximum
time to collect Kafka messages before running
type inference on the collected set.
LAYER
: Comma-separated list of geo file
layer names.
LOADING_MODE
: Scheme for distributing the
extraction and loading of data from the source
data file(s). This option applies only when
loading files that are local to the database.
Supported values:
HEAD
: The head node loads all data. All
files must be available to the head
node.
DISTRIBUTED_SHARED
: The head node
coordinates loading data by worker
processes across all nodes from shared
files available to all workers. NOTE:
Instead of existing on a shared source,
the files can be duplicated on a source
local to each host to improve
performance, though the files must
appear as the same data set from the
perspective of all hosts performing the
load.
DISTRIBUTED_LOCAL
: A single worker
process on each node loads all files
that are available to it. This option
works best when each worker loads files
from its own file system, to maximize
performance. In order to avoid data
duplication, either each worker
performing the load needs to have
visibility to a set of files unique to
it (no file is visible to more than one
node) or the target table needs to have
a primary key (which will allow the
worker to automatically deduplicate
data). NOTE: If the target table
doesn't exist, the table structure will
be determined by the head node. If the
head node has no files local to it, it
will be unable to determine the
structure and the request will fail. If
the head node is configured to have no
worker processes, no data strictly
accessible to the head node will be
loaded.
The default value is HEAD.
LOCAL_TIME_OFFSET
: Apply an offset to Avro
local timestamp columns.
MAX_RECORDS_TO_LOAD
: Limit the number of
records to load in this request: if this number
is larger than BATCH_SIZE
, then the number of records loaded
will be limited to the next whole number of
BATCH_SIZE
(per working thread).
NUM_TASKS_PER_RANK
: Number of tasks for reading
files per rank. The default is the system
configuration parameter
external_file_reader_num_tasks.
POLL_INTERVAL
: When SUBSCRIBE is TRUE,
the number of seconds between attempts to
load external files into the table. If zero,
polling will be continuous as long as data is
found. If no data is found, the interval will
steadily increase to a maximum of 60 seconds.
The default value is '0'.
PRIMARY_KEYS
: Comma separated list of column
names to set as primary keys, when not specified
in the type.
SCHEMA_REGISTRY_SCHEMA_NAME
: Name of the Avro
schema in the schema registry to use when
reading Avro records.
SHARD_KEYS
: Comma separated list of column
names to set as shard keys, when not specified
in the type.
SKIP_LINES
: Number of lines to skip from the beginning
of the file.
START_OFFSETS
: Starting offsets by partition to
fetch from Kafka. A comma-separated list of
partition:offset pairs.
SUBSCRIBE
: Continuously poll the data source to
check for new data and load it into the table.
Supported values: TRUE, FALSE.
The default value is FALSE.
TABLE_INSERT_MODE
: Insertion scheme to use when
inserting records from multiple shapefiles.
Supported values:
SINGLE
: Insert all records into a
single table.
TABLE_PER_FILE
: Insert records from
each file into a new table corresponding
to that file.
The default value is SINGLE.
TEXT_COMMENT_STRING
: Specifies the character
string that should be interpreted as a comment
line prefix in the source data. All lines in
the data starting with the provided string are
ignored. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '#'.
TEXT_DELIMITER
: Specifies the character
delimiting field values in the source data and
field names in the header (if present). For
DELIMITED_TEXT
FILE_TYPE
only. The default value is ','.
TEXT_ESCAPE_CHARACTER
: Specifies the character
that is used to escape other characters in the
source data. An 'a', 'b', 'f', 'n', 'r', 't',
or 'v' preceded by an escape character will be
interpreted as the ASCII bell, backspace, form
feed, line feed, carriage return, horizontal
tab, & vertical tab, respectively. For example,
the escape character followed by an 'n' will be
interpreted as a newline within a field value.
The escape character can also be used to escape
the quoting character, and will be treated as an
escape character whether it is within a quoted
field value or not. For DELIMITED_TEXT
FILE_TYPE
only.
TEXT_HAS_HEADER
: Indicates whether the source
data contains a header row. For DELIMITED_TEXT
FILE_TYPE
only.
Supported values: TRUE, FALSE.
The default value is TRUE.
TEXT_HEADER_PROPERTY_DELIMITER
: Specifies the
delimiter for column properties in the
header row (if present). Cannot be set to same
value as TEXT_DELIMITER
. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '|'.
TEXT_NULL_STRING
: Specifies the character
string that should be interpreted as a null
value in the source data. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '\N'.
TEXT_QUOTE_CHARACTER
: Specifies the character
that should be interpreted as a field value
quoting character in the source data. The
character must appear at beginning and end of
field value to take effect. Delimiters within
quoted fields are treated as literals and not
delimiters. Within a quoted field, two
consecutive quote characters will be interpreted
as a single literal quote character, effectively
escaping it. To not have a quote character,
specify an empty string. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '"'.
TEXT_SEARCH_COLUMNS
: Add 'text_search' property
to internally inferred string columns. Comma-separated
list of column names, or '*' for all
columns. To add the 'text_search' property only to
string columns greater than or equal to a
minimum size, also set the TEXT_SEARCH_MIN_COLUMN_LENGTH option.
TEXT_SEARCH_MIN_COLUMN_LENGTH
: Set the minimum
column size for strings to apply the
'text_search' property to. Used only when TEXT_SEARCH_COLUMNS
has a value.
TRUNCATE_STRINGS
: If set to TRUE
, truncate string values that are longer
than the column's type size.
Supported values: TRUE, FALSE.
The default value is FALSE.
TRUNCATE_TABLE
: If set to TRUE
, truncates the table specified by tableName
prior to loading the file(s).
Supported values: TRUE, FALSE.
The default value is FALSE.
TYPE_INFERENCE_MODE
: Optimize type inferencing
for either speed or accuracy.
Supported values:
ACCURACY
: Scans data to get
exactly-typed & sized columns for all
data scanned.
SPEED
: Scans data and picks the widest
possible column types so that 'all'
values will fit with minimum data
scanned
The default value is ACCURACY.
UPDATE_ON_EXISTING_PK
: Specifies the record
collision policy for inserting into a table with
a primary key. If set to TRUE
, any existing table record with primary
key values that match those of a record being
inserted will be replaced by that new record
(the new data will be 'upserted'). If set to
FALSE
, any existing table record with primary
key values that match those of a record being
inserted will remain unchanged, while the new
record will be rejected and the error handled as
determined by IGNORE_EXISTING_PK
& ERROR_HANDLING
. If the specified table does
not have a primary key, then this option has no
effect.
Supported values:
TRUE
: Upsert new records when primary
keys match existing records
FALSE
: Reject new records when primary
keys match existing records
The default value is FALSE.
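Taken together, the ingest options above are passed as a plain string map. The following is a minimal sketch of building such a map for a delimited-text file load; the table name and file paths in the commented-out client call are hypothetical, and the call itself requires a live cluster:

```java
import java.util.HashMap;
import java.util.Map;

public class FileLoadOptionsExample {
    public static Map<String, String> buildOptions() {
        // Option keys and values are the lowercase string forms of the
        // constants documented above.
        Map<String, String> options = new HashMap<>();
        options.put("file_type", "delimited_text");          // CSV-style source
        options.put("error_handling", "ignore_bad_records"); // skip malformed rows
        options.put("bad_record_table_name", "bad_rows");    // capture rejects
        options.put("bad_record_table_limit", "10000");      // documented default
        options.put("batch_size", "50000");                  // documented default
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> options = buildOptions();
        // With a connected client, the map would be passed as the final
        // argument of the insert-from-files call, e.g.:
        // gpudb.insertRecordsFromFiles("my_table", filepaths, modifyColumns,
        //                              createTableOptions, options);
        System.out.println(options.get("file_type")); // delimited_text
    }
}
```

Because every value is a string, numeric settings such as batch_size are supplied as quoted numbers, matching the defaults quoted in the descriptions above.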
The default value is an empty Map.
Returns: a Response object containing the results of the operation.
Throws: GPUdbException - if an error occurs during the operation.

public InsertRecordsFromPayloadResponse insertRecordsFromPayload(InsertRecordsFromPayloadRequest request) throws GPUdbException
Returns once all records are processed.
request - Request object containing the parameters for the operation.
Returns: a Response object containing the results of the operation.
Throws: GPUdbException - if an error occurs during the operation.

public InsertRecordsFromPayloadResponse insertRecordsFromPayload(String tableName, String dataText, ByteBuffer dataBytes, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options) throws GPUdbException
Returns once all records are processed.
tableName
- Name of the table into which the data will be
inserted, in [schema_name.]table_name format, using
standard name resolution rules. If the table
does not exist, the table will be created using either
an existing TYPE_ID
or the type inferred from the payload, and
the new table name will have to meet standard table naming criteria.
dataText - Records formatted as delimited text
dataBytes - Records formatted as binary data
modifyColumns - Not implemented yet. The default value is an empty Map.
createTableOptions
- Options used when creating the target table.
Includes type to use. The other options match
those in createTable
.
TYPE_ID
: ID of a currently
registered type. The default
value is ''.
NO_ERROR_IF_EXISTS
: If TRUE
, prevents an error from
occurring if the table already exists
and is of the given type. If a table
with the same ID but a different type
exists, it is still an error.
Supported values: TRUE, FALSE.
The default value is FALSE.
IS_REPLICATED
: Affects the distribution scheme
for the table's data. If TRUE
and the given type has no
explicit shard key defined,
the table will be replicated. If
FALSE
, the table will be sharded according
to the shard key specified in the
given TYPE_ID
, or randomly sharded,
if no shard key is specified. Note
that a type containing a shard key
cannot be used to create a replicated
table.
Supported values: TRUE, FALSE.
The default value is FALSE.
FOREIGN_KEYS
: Semicolon-separated
list of foreign keys, of
the format '(source_column_name [,
...]) references
target_table_name(primary_key_column_name
[, ...]) [as foreign_key_name]'.
FOREIGN_SHARD_KEY
: Foreign shard key
of the format 'source_column
references shard_by_column from
target_table(primary_key_column)'.
PARTITION_TYPE
: Partitioning scheme
to use.
Supported values:
RANGE
: Use range
partitioning.
INTERVAL
: Use interval
partitioning.
LIST
: Use list
partitioning.
HASH
: Use hash
partitioning.
SERIES
: Use series
partitioning.
PARTITION_KEYS
: Comma-separated list
of partition keys, which are the
columns or column expressions by
which records will be assigned to
partitions defined by PARTITION_DEFINITIONS
.
PARTITION_DEFINITIONS
:
Comma-separated list of partition
definitions, whose format depends on
the choice of PARTITION_TYPE
. See range partitioning,
interval
partitioning, list partitioning,
hash partitioning,
or series partitioning
for example formats.
IS_AUTOMATIC_PARTITION
: If TRUE
, a new partition will be
created for values which don't fall
into an existing partition.
Currently only supported for list partitions.
Supported values: TRUE, FALSE.
The default value is FALSE.
TTL
: Sets the TTL of the table
specified in tableName
.
CHUNK_SIZE
: Indicates the number of
records per chunk to be used for this
table.
CHUNK_COLUMN_MAX_MEMORY
: Indicates
the target maximum data size for each
column in a chunk to be used for this
table.
CHUNK_MAX_MEMORY
: Indicates the
target maximum data size for all
columns in a chunk to be used for
this table.
IS_RESULT_TABLE
: Indicates whether
the table is a memory-only table.
A result table cannot contain columns
with text_search data-handling, and
it will not be retained if the server
is restarted.
Supported values: TRUE, FALSE.
The default value is FALSE.
STRATEGY_DEFINITION
: The tier strategy for
the table and its columns.
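As an illustration of the createTableOptions above, here is a hedged sketch building a map that range-partitions a new table on a hypothetical 'order_date' column; key names are the lowercase string forms of the documented constants:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateTableOptionsExample {
    public static Map<String, String> build() {
        Map<String, String> createTableOptions = new HashMap<>();
        // Shard normally rather than replicating (the documented default).
        createTableOptions.put("is_replicated", "false");
        // Range-partition on the hypothetical 'order_date' column; the
        // PARTITION_DEFINITIONS string format is documented separately
        // and elided here.
        createTableOptions.put("partition_type", "RANGE");
        createTableOptions.put("partition_keys", "order_date");
        return createTableOptions;
    }

    public static void main(String[] args) {
        System.out.println(build().get("partition_type")); // RANGE
    }
}
```

The resulting map would be passed as the createTableOptions argument of the overload documented below, taking effect only when the target table does not already exist.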
The default value is an empty Map.
options - Optional parameters.
AVRO_HEADER_BYTES
: Optional number of bytes to
skip when reading an Avro record.
AVRO_NUM_RECORDS
: Optional number of Avro
records, if the data includes only records.
AVRO_SCHEMA
: Optional string representing the Avro
schema, for inserting records in Avro format that
do not include their schema.
AVRO_SCHEMALESS
: When the user provides
'avro_schema', the Avro data is assumed to be
schemaless, unless specified otherwise. The default is
'true' when avro_schema is given. Ignored when
avro_schema is not given.
Supported values: TRUE, FALSE.
BAD_RECORD_TABLE_NAME
: Optional name of a table
to which records that were rejected are written.
The bad-record-table has the following columns:
line_number (long), line_rejected (string),
error_message (string).
BAD_RECORD_TABLE_LIMIT
: A positive integer
indicating the maximum number of records that
can be written to the bad-record-table. The
default value is '10000'.
BAD_RECORD_TABLE_LIMIT_PER_INPUT
: For
subscriptions: A positive integer indicating the
maximum number of records that can be written to
the bad-record-table per file/payload. The default
value is 'bad_record_table_limit', and the total
size of the table per rank is limited to
'bad_record_table_limit'.
BATCH_SIZE
: Internal tuning parameter--number
of records per batch when inserting data.
COLUMN_FORMATS
: For each target column
specified, applies the column-property-bound
format to the source data loaded into that
column. Each column format will contain a
mapping of one or more of its column properties
to an appropriate format for each property.
Currently supported column properties include
date, time, & datetime. The parameter value must
be formatted as a JSON string of maps of column
names to maps of column properties to their
corresponding column formats, e.g., '{
"order_date" : { "date" : "%Y.%m.%d" },
"order_time" : { "time" : "%H:%M:%S" } }'. See
DEFAULT_COLUMN_FORMATS
for valid format syntax.
COLUMNS_TO_LOAD
: Specifies a comma-delimited
list of columns from the source data to load.
If more than one file is being loaded, this list
applies to all files. Column numbers can be
specified discretely or as a range. For
example, a value of '5,7,1..3' will insert
values from the fifth column in the source data
into the first column in the target table, from
the seventh column in the source data into the
second column in the target table, and from the
first through third columns in the source data
into the third through fifth columns in the
target table. If the source data contains a
header, column names matching the file header
names may be provided instead of column numbers.
If the target table doesn't exist, the table
will be created with the columns in this order.
If the target table does exist with columns in a
different order than the source data, this list
can be used to match the order of the target
table. For example, a value of 'C, B, A' will
create a three column table with column C,
followed by column B, followed by column A; or
will insert those fields in that order into a
table created with columns in that order. If
the target table exists, the column names must
match the source data field names for a
name-mapping to be successful. Mutually
exclusive with COLUMNS_TO_SKIP
.
COLUMNS_TO_SKIP
: Specifies a comma-delimited
list of columns from the source data to skip.
Mutually exclusive with COLUMNS_TO_LOAD
.
COMPRESSION_TYPE
: Optional: payload compression
type.
Supported values:
NONE
: Uncompressed
AUTO
: Default. Auto detect compression
type
GZIP
: gzip file compression.
BZIP2
: bzip2 file compression.
The default value is AUTO.
DEFAULT_COLUMN_FORMATS
: Specifies the default
format to be applied to source data loaded into
columns with the corresponding column property.
Currently supported column properties include
date, time, & datetime. This default
column-property-bound format can be overridden
by specifying a column property & format for a
given target column in COLUMN_FORMATS
. For each specified annotation,
the format will apply to all columns with that
annotation unless a custom COLUMN_FORMATS
for that annotation is
specified. The parameter value must be
formatted as a JSON string that is a map of
column properties to their respective column
formats, e.g., '{ "date" : "%Y.%m.%d", "time" :
"%H:%M:%S" }'. Column formats are specified as
a string of control characters and plain text.
The supported control characters are 'Y', 'm',
'd', 'H', 'M', and 'S', which follow the
Linux 'strptime()' specification, as well as
's', which specifies seconds and fractional
seconds (though the fractional component will be
truncated past milliseconds). Formats for the
'date' annotation must include the 'Y', 'm', and
'd' control characters. Formats for the 'time'
annotation must include the 'H', 'M', and either
'S' or 's' (but not both) control characters.
Formats for the 'datetime' annotation must meet
both the 'date' and 'time' control character
requirements. For example, '{"datetime" :
"%m/%d/%Y %H:%M:%S" }' would be used to
interpret text as "05/04/2000 12:12:11".
ERROR_HANDLING
: Specifies how errors should be
handled upon insertion.
Supported values:
PERMISSIVE
: Records with missing
columns are populated with nulls if
possible; otherwise, the malformed
records are skipped.
IGNORE_BAD_RECORDS
: Malformed records
are skipped.
ABORT
: Stops current insertion and
aborts entire operation when an error is
encountered. Primary key collisions are
considered abortable errors in this
mode.
The default value is ABORT.
FILE_TYPE
: Specifies the type of the file(s)
whose records will be inserted.
Supported values:
AVRO
: Avro file format
DELIMITED_TEXT
: Delimited text file
format; e.g., CSV, TSV, PSV, etc.
GDB
: Esri/GDB file format
JSON
: JSON file format
PARQUET
: Apache Parquet file format
SHAPEFILE
: ShapeFile file format
The default value is DELIMITED_TEXT.
FLATTEN_COLUMNS
: Specifies how to handle nested
columns.
Supported values:
TRUE
: Break up nested columns to
multiple columns
FALSE
: Treat nested columns as json
columns instead of flattening
The default value is FALSE.
GDAL_CONFIGURATION_OPTIONS
: Comma-separated
list of GDAL configuration options for the
specific request, as key=value pairs. The default value is ''.
IGNORE_EXISTING_PK
: Specifies the record
collision error-suppression policy for inserting
into a table with a primary key, only used when
not in upsert mode (upsert mode is disabled when
UPDATE_ON_EXISTING_PK
is FALSE
). If set to TRUE
, any record being inserted that is
rejected for having primary key values that
match those of an existing table record will be
ignored with no error generated. If FALSE
, the rejection of any record for having
primary key values matching an existing record
will result in an error being reported, as
determined by ERROR_HANDLING
. If the specified table does
not have a primary key or if upsert mode is in
effect (UPDATE_ON_EXISTING_PK
is TRUE
), then this option has no effect.
Supported values:
TRUE
: Ignore new records whose primary
key values collide with those of
existing records
FALSE
: Treat as errors any new records
whose primary key values collide with
those of existing records
The default value is FALSE.
INGESTION_MODE
: Whether to do a full load, dry
run, or perform a type inference on the source
data.
Supported values:
FULL
: Run a type inference on the
source data (if needed) and ingest
DRY_RUN
: Does not load data, but walks
through the source data and determines
the number of valid records, taking into
account the current mode of ERROR_HANDLING
.
TYPE_INFERENCE_ONLY
: Infer the type of
the source data and return, without
ingesting any data. The inferred type
is returned in the response.
The default value is FULL.
LAYER
: Optional: comma-separated list of geo file
layer names. The default value is ''.
LOADING_MODE
: Scheme for distributing the
extraction and loading of data from the source
data file(s). This option applies only when
loading files that are local to the database.
Supported values:
HEAD
: The head node loads all data. All
files must be available to the head
node.
DISTRIBUTED_SHARED
: The head node
coordinates loading data by worker
processes across all nodes from shared
files available to all workers. NOTE:
Instead of existing on a shared source,
the files can be duplicated on a source
local to each host to improve
performance, though the files must
appear as the same data set from the
perspective of all hosts performing the
load.
DISTRIBUTED_LOCAL
: A single worker
process on each node loads all files
that are available to it. This option
works best when each worker loads files
from its own file system, to maximize
performance. In order to avoid data
duplication, either each worker
performing the load needs to have
visibility to a set of files unique to
it (no file is visible to more than one
node) or the target table needs to have
a primary key (which will allow the
worker to automatically deduplicate
data). NOTE: If the target table
doesn't exist, the table structure will
be determined by the head node. If the
head node has no files local to it, it
will be unable to determine the
structure and the request will fail. If
the head node is configured to have no
worker processes, no data strictly
accessible to the head node will be
loaded.
The default value is HEAD.
LOCAL_TIME_OFFSET
: Apply an offset to Avro local timestamp
columns.
MAX_RECORDS_TO_LOAD
: Limit the number of
records to load in this request: If this number
is larger than a batch_size, then the number of
records loaded will be limited to the next whole
number of batch_size (per working thread). The
default value is ''.
NUM_TASKS_PER_RANK
: Optional: number of tasks
for reading files per rank. The default is the system
configuration parameter external_file_reader_num_tasks.
POLL_INTERVAL
: When SUBSCRIBE is TRUE,
the number of seconds between attempts to
load external files into the table. If zero,
polling will be continuous as long as data is
found. If no data is found, the interval will
steadily increase to a maximum of 60 seconds.
PRIMARY_KEYS
: Optional: comma separated list of
column names, to set as primary keys, when not
specified in the type. The default value is ''.
SCHEMA_REGISTRY_SCHEMA_ID
SCHEMA_REGISTRY_SCHEMA_NAME
SCHEMA_REGISTRY_SCHEMA_VERSION
SHARD_KEYS
: Optional: comma-separated list of
column names, to set as shard keys, when not
specified in the type. The default value is ''.
SKIP_LINES
: Number of lines to skip from the beginning
of the file.
SUBSCRIBE
: Continuously poll the data source to
check for new data and load it into the table.
Supported values: TRUE, FALSE.
The default value is FALSE.
TABLE_INSERT_MODE
: Insertion scheme to use when
inserting records from multiple files: if
TABLE_PER_FILE, insert records from each file into a
new table corresponding to that file. Currently
supported only for shapefiles.
Supported values: SINGLE, TABLE_PER_FILE.
The default value is SINGLE.
TEXT_COMMENT_STRING
: Specifies the character
string that should be interpreted as a comment
line prefix in the source data. All lines in
the data starting with the provided string are
ignored. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '#'.
TEXT_DELIMITER
: Specifies the character
delimiting field values in the source data and
field names in the header (if present). For
DELIMITED_TEXT
FILE_TYPE
only. The default value is ','.
TEXT_ESCAPE_CHARACTER
: Specifies the character
that is used to escape other characters in the
source data. An 'a', 'b', 'f', 'n', 'r', 't',
or 'v' preceded by an escape character will be
interpreted as the ASCII bell, backspace, form
feed, line feed, carriage return, horizontal
tab, & vertical tab, respectively. For example,
the escape character followed by an 'n' will be
interpreted as a newline within a field value.
The escape character can also be used to escape
the quoting character, and will be treated as an
escape character whether it is within a quoted
field value or not. For DELIMITED_TEXT
FILE_TYPE
only.
TEXT_HAS_HEADER
: Indicates whether the source
data contains a header row. For DELIMITED_TEXT
FILE_TYPE
only.
Supported values: TRUE, FALSE.
The default value is TRUE.
TEXT_HEADER_PROPERTY_DELIMITER
: Specifies the
delimiter for column properties in the
header row (if present). Cannot be set to same
value as TEXT_DELIMITER
. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '|'.
TEXT_NULL_STRING
: Specifies the character
string that should be interpreted as a null
value in the source data. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '\N'.
TEXT_QUOTE_CHARACTER
: Specifies the character
that should be interpreted as a field value
quoting character in the source data. The
character must appear at beginning and end of
field value to take effect. Delimiters within
quoted fields are treated as literals and not
delimiters. Within a quoted field, two
consecutive quote characters will be interpreted
as a single literal quote character, effectively
escaping it. To not have a quote character,
specify an empty string. For DELIMITED_TEXT
FILE_TYPE
only. The default value is '"'.
TEXT_SEARCH_COLUMNS
: Add 'text_search' property
to internally inferred string columns. Comma-separated
list of column names, or '*' for all
columns. To add the 'text_search' property only to
string columns of a minimum size, also set the
option 'text_search_min_column_length'.
TEXT_SEARCH_MIN_COLUMN_LENGTH
: Set the minimum
column size for strings to apply the 'text_search'
property to. Used only when
'text_search_columns' has a value.
TRUNCATE_STRINGS
: If set to TRUE
, truncate string values that are longer
than the column's type size.
Supported values: TRUE, FALSE.
The default value is FALSE.
TRUNCATE_TABLE
: If set to TRUE
, truncates the table specified by tableName
prior to loading the file(s).
Supported values: TRUE, FALSE.
The default value is FALSE.
TYPE_INFERENCE_MODE
: Optimize type inferencing
for either speed or accuracy.
Supported values:
ACCURACY
: Scans data to get
exactly-typed & sized columns for all
data scanned.
SPEED
: Scans data and picks the widest
possible column types so that 'all'
values will fit with minimum data
scanned
The default value is ACCURACY.
UPDATE_ON_EXISTING_PK
: Specifies the record
collision policy for inserting into a table with
a primary key. If set to TRUE
, any existing table record with primary
key values that match those of a record being
inserted will be replaced by that new record
(the new data will be "upserted"). If set to
FALSE
, any existing table record with primary
key values that match those of a record being
inserted will remain unchanged, while the new
record will be rejected and the error handled as
determined by IGNORE_EXISTING_PK
& ERROR_HANDLING
. If the specified table does
not have a primary key, then this option has no
effect.
Supported values:
TRUE
: Upsert new records when primary
keys match existing records
FALSE
: Reject new records when primary
keys match existing records
The default value is FALSE.
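A hedged usage sketch for the convenience overload above, building only the delimited-text options; the cluster URL and table name in the commented-out call are placeholders, and the call itself requires a reachable cluster:

```java
import java.util.HashMap;
import java.util.Map;

public class PayloadInsertExample {
    public static Map<String, String> csvOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("file_type", "delimited_text"); // payload is CSV text
        options.put("text_has_header", "true");     // first line holds column names
        options.put("text_delimiter", ",");         // documented default, shown explicitly
        options.put("error_handling", "abort");     // documented default
        return options;
    }

    public static void main(String[] args) {
        // A small delimited-text payload for the dataText parameter;
        // dataBytes would be null since text is supplied instead.
        String dataText = "id,name\n1,alpha\n2,beta\n";
        Map<String, String> options = csvOptions();
        // With a live cluster:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // gpudb.insertRecordsFromPayload("example_table", dataText, null,
        //         new HashMap<>(), new HashMap<>(), options);
        System.out.println(dataText.split("\n").length - 1); // record count: 2
    }
}
```

Passing empty maps for modifyColumns (not yet implemented) and createTableOptions lets the table type be inferred from the payload when the target table does not exist.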
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public InsertRecordsFromQueryResponse insertRecordsFromQuery(InsertRecordsFromQueryRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public InsertRecordsFromQueryResponse insertRecordsFromQuery(String tableName, String remoteQuery, Map<String,Map<String,String>> modifyColumns, Map<String,String> createTableOptions, Map<String,String> options) throws GPUdbException
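As a sketch of how this overload is typically driven, the example below builds the plain string-to-string option maps it takes; the table, data source, and query names are illustrative, and the lowercase key strings stand in for the option constants documented below:

```java
import java.util.HashMap;
import java.util.Map;

public class InsertFromQueryOptionsExample {
    // Builds an upsert-style option map; keys mirror the documented option
    // constants (update_on_existing_pk, error_handling, batch_size, ...).
    public static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("datasource_name", "my_jdbc_source"); // hypothetical data source
        options.put("update_on_existing_pk", "true");     // upsert on PK collisions
        options.put("error_handling", "abort");           // stop on the first error
        options.put("batch_size", "50000");               // records per insert batch
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> options = buildOptions();
        System.out.println(options.get("update_on_existing_pk")); // prints "true"
        // With a live connection, the call would look like (names illustrative):
        // GPUdb db = new GPUdb("http://localhost:9191");
        // db.insertRecordsFromQuery("ki_home.target", "SELECT * FROM remote_tbl",
        //         null, new HashMap<>(), options);
    }
}
```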
tableName
- Name of the table into which the data will be
inserted, in [schema_name.]table_name format, using
standard name resolution rules. If the table
does not exist, the table will be created using either
an existing TYPE_ID
or the type inferred from the remote query,
and the new table name will have to meet standard table naming criteria.remoteQuery
- Query for which result data needs to be importedmodifyColumns
- Not implemented yet. The default value is an empty
Map
.createTableOptions
- Options used when creating the target table.
TYPE_ID
: ID of a currently
registered type. The default
value is ''.
NO_ERROR_IF_EXISTS
: If TRUE
, prevents an error from
occurring if the table already exists
and is of the given type. If a table
with the same ID but a different type
exists, it is still an error.
Supported values:
The default value is FALSE
.
IS_REPLICATED
: Affects the distribution scheme
for the table's data. If TRUE
and the given type has no
explicit shard key defined,
the table will be replicated. If
FALSE
, the table will be sharded according
to the shard key specified in the
given TYPE_ID
, or randomly sharded,
if no shard key is specified. Note
that a type containing a shard key
cannot be used to create a replicated
table.
Supported values:
The default value is FALSE
.
FOREIGN_KEYS
: Semicolon-separated
list of foreign keys, of
the format '(source_column_name [,
...]) references
target_table_name(primary_key_column_name
[, ...]) [as foreign_key_name]'.
FOREIGN_SHARD_KEY
: Foreign shard key
of the format 'source_column
references shard_by_column from
target_table(primary_key_column)'.
PARTITION_TYPE
: Partitioning scheme
to use.
Supported values:
RANGE
: Use range
partitioning.
INTERVAL
: Use interval
partitioning.
LIST
: Use list
partitioning.
HASH
: Use hash
partitioning.
SERIES
: Use series
partitioning.
PARTITION_KEYS
: Comma-separated list
of partition keys, which are the
columns or column expressions by
which records will be assigned to
partitions defined by PARTITION_DEFINITIONS
.
PARTITION_DEFINITIONS
:
Comma-separated list of partition
definitions, whose format depends on
the choice of PARTITION_TYPE
. See range partitioning,
interval
partitioning, list partitioning,
hash partitioning,
or series partitioning
for example formats.
IS_AUTOMATIC_PARTITION
: If TRUE
, a new partition will be
created for values which don't fall
into an existing partition.
Currently only supported for list partitions.
Supported values:
The default value is FALSE
.
TTL
: Sets the TTL of the table
specified in tableName
.
CHUNK_SIZE
: Indicates the number of
records per chunk to be used for this
table.
IS_RESULT_TABLE
: Indicates whether
the table is a memory-only table.
A result table cannot contain columns
with text_search data-handling, and
it will not be retained if the server
is restarted.
Supported values:
The default value is FALSE
.
STRATEGY_DEFINITION
: The tier strategy for
the table and its columns.
Map
.options
- Optional parameters.
BAD_RECORD_TABLE_NAME
: Optional name of a table
to which records that were rejected are written.
The bad-record-table has the following columns:
line_number (long), line_rejected (string),
error_message (string). When error handling is
ABORT, the bad-record table is not populated.
BAD_RECORD_TABLE_LIMIT
: A positive integer
indicating the maximum number of records that
can be written to the bad-record-table.
The default value is '10000'.
BATCH_SIZE
: Number of records per batch when
inserting data.
DATASOURCE_NAME
: Name of an existing external
data source from which the table will be loaded
ERROR_HANDLING
: Specifies how errors should be
handled upon insertion.
Supported values:
PERMISSIVE
: Records with missing
columns are populated with nulls if
possible; otherwise, the malformed
records are skipped.
IGNORE_BAD_RECORDS
: Malformed records
are skipped.
ABORT
: Stops current insertion and
aborts entire operation when an error is
encountered. Primary key collisions are
considered abortable errors in this
mode.
The default value is ABORT
.
IGNORE_EXISTING_PK
: Specifies the record
collision error-suppression policy for inserting
into a table with a primary key, only used when
not in upsert mode (upsert mode is disabled when
UPDATE_ON_EXISTING_PK
is FALSE
). If set to TRUE
, any record being inserted that is
rejected for having primary key values that
match those of an existing table record will be
ignored with no error generated. If FALSE
, the rejection of any record for having
primary key values matching an existing record
will result in an error being reported, as
determined by ERROR_HANDLING
. If the specified table does
not have a primary key or if upsert mode is in
effect (UPDATE_ON_EXISTING_PK
is TRUE
), then this option has no effect.
Supported values:
TRUE
: Ignore new records whose primary
key values collide with those of
existing records
FALSE
: Treat as errors any new records
whose primary key values collide with
those of existing records
The default value is FALSE
.
INGESTION_MODE
: Whether to do a full load, dry
run, or perform a type inference on the source
data.
Supported values:
FULL
: Run a type inference on the
source data (if needed) and ingest
DRY_RUN
: Does not load data, but walks
through the source data and determines
the number of valid records, taking into
account the current mode of ERROR_HANDLING
.
TYPE_INFERENCE_ONLY
: Infer the type of
the source data and return, without
ingesting any data. The inferred type
is returned in the response.
The default value is FULL
.
JDBC_FETCH_SIZE
: The JDBC fetch size, which
determines how many rows to fetch per round
trip.
JDBC_SESSION_INIT_STATEMENT
: Executes the
statement for each JDBC session before doing the
actual load. The default value is ''.
NUM_SPLITS_PER_RANK
: Optional: number of splits
for reading data per rank. Default will be
external_file_reader_num_tasks. The default
value is ''.
NUM_TASKS_PER_RANK
: Optional: number of tasks
for reading data per rank. Default will be
external_file_reader_num_tasks
PRIMARY_KEYS
: Optional: comma-separated list of
column names to set as primary keys, when not
specified in the type. The default value is ''.
SHARD_KEYS
: Optional: comma-separated list of
column names to set as shard keys, when not
specified in the type. The default value is ''.
SUBSCRIBE
: Continuously poll the data source to
check for new data and load it into the table.
Supported values:
The default value is FALSE
.
TRUNCATE_TABLE
: If set to TRUE
, truncates the table specified by tableName
prior to loading the data.
Supported values:
The default value is FALSE
.
REMOTE_QUERY
: Remote SQL query from which data
will be sourced
REMOTE_QUERY_ORDER_BY
: Name of column to be
used for splitting the query into multiple
sub-queries using ordering of given column. The
default value is ''.
REMOTE_QUERY_FILTER_COLUMN
: Name of column to
be used for splitting the query into multiple
sub-queries using the data distribution of given
column. The default value is ''.
REMOTE_QUERY_INCREASING_COLUMN
: Column on
subscribed remote query result that will
increase for new records (e.g., TIMESTAMP). The
default value is ''.
REMOTE_QUERY_PARTITION_COLUMN
: Alias name for
remote_query_filter_column. The default value is
''.
TRUNCATE_STRINGS
: If set to TRUE
, truncate string values that are longer
than the column's type size.
Supported values:
The default value is FALSE
.
UPDATE_ON_EXISTING_PK
: Specifies the record
collision policy for inserting into a table with
a primary key. If set to TRUE
, any existing table record with primary
key values that match those of a record being
inserted will be replaced by that new record
(the new data will be "upserted"). If set to
FALSE
, any existing table record with primary
key values that match those of a record being
inserted will remain unchanged, while the new
record will be rejected and the error handled as
determined by IGNORE_EXISTING_PK
& ERROR_HANDLING
. If the specified table does
not have a primary key, then this option has no
effect.
Supported values:
TRUE
: Upsert new records when primary
keys match existing records
FALSE
: Reject new records when primary
keys match existing records
The default value is FALSE
.
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public InsertRecordsRandomResponse insertRecordsRandom(InsertRecordsRandomRequest request) throws GPUdbException
This operation is synchronous, meaning that a response will not be returned until all random records are fully available.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public InsertRecordsRandomResponse insertRecordsRandom(String tableName, long count, Map<String,Map<String,Double>> options) throws GPUdbException
This operation is synchronous, meaning that a response will not be returned until all random records are fully available.
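The nested map-of-maps shape of the options parameter is easiest to see in code. The sketch below builds a reproducible-seed specification plus bounds applied to all columns; the bound values and the table name in the comment are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class RandomRecordOptionsExample {
    // Builds the Map<String, Map<String, Double>> options: top-level keys name
    // a column (or 'seed'/'all'), inner keys name the per-column parameter.
    public static Map<String, Map<String, Double>> buildOptions() {
        Map<String, Map<String, Double>> options = new HashMap<>();

        Map<String, Double> seed = new HashMap<>();
        seed.put("value", 100.0);            // fixed seed for repeatable data
        options.put("seed", seed);

        Map<String, Double> all = new HashMap<>();
        all.put("min", 0.0);                 // bounds for every column
        all.put("max", 1000.0);
        all.put("null_percentage", 0.1);     // 10% nulls in nullable columns
        options.put("all", all);
        return options;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Double>> options = buildOptions();
        System.out.println(options.get("seed").get("value")); // prints "100.0"
        // Hypothetical call, assuming table "ki_home.test_table" exists:
        // db.insertRecordsRandom("ki_home.test_table", 10000, options);
    }
}
```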
tableName
- Table to which random records will be added, in
[schema_name.]table_name format, using standard name resolution rules. Must be an
existing table, not a view.count
- Number of records to generate.options
- Optional parameter to pass in specifications for the
randomness of the values. This map is different from
the *options* parameter of most other endpoints in that
it is a map of string to map of string to doubles, while
most others are maps of string to string. In this map,
the top level keys represent which column's parameters
are being specified, while the internal keys represent
which parameter is being specified. These parameters
take on different meanings depending on the type of the
column. Below follows a more detailed description of
the map:
SEED
: If provided, the internal random number
generator will be initialized with the given
value. The minimum is 0. This allows the
same set of random numbers to be generated
across invocations of this endpoint, in case the
user wants to repeat the test. Since options
is a map of maps, an internal
map is needed to provide the seed value. For example, to
pass 100 as the seed value through this
parameter, you need something equivalent to:
'options' = {'seed': { 'value': 100 } }.
VALUE
: The seed value to use
ALL
: This key indicates that the specifications
relayed in the internal map are to be applied to
all columns of the records.
MIN
: For numerical columns, the minimum
of the generated values is set to this
value. Default is -99999. For point,
shape, and track columns, min for
numeric 'x' and 'y' columns needs to be
within [-180, 180] and [-90, 90],
respectively. The default minimum
possible values for these columns in
such cases are -180.0 and -90.0. For the
'TIMESTAMP' column, the default minimum
corresponds to Jan 1, 2010. For string
columns, the minimum length of the
randomly generated strings is set to
this value (default is 0). If both
minimum and maximum are provided,
minimum must be less than or equal to
max. If the min is outside the accepted
ranges for string columns and 'x' and
'y' columns for point/shape/track, then
those parameters will not be set;
however, an error will not be thrown in
such a case. It is the responsibility of
the user to use the ALL
parameter judiciously.
MAX
: For numerical columns, the maximum
of the generated values is set to this
value. Default is 99999. For point,
shape, and track columns, max for
numeric 'x' and 'y' columns needs to be
within [-180, 180] and [-90, 90],
respectively. The default maximum
possible values for these columns in
such cases are 180.0 and 90.0. For
string columns, the maximum length of
the randomly generated strings. If both
minimum and maximum are provided, *max*
must be greater than or equal to *min*.
If the *max* is outside the accepted
ranges for string columns and 'x' and
'y' columns for point/shape/track, then
those parameters will not be set;
however, an error will not be thrown in
such a case. It is the responsibility of
the user to use the ALL
parameter judiciously.
INTERVAL
: If specified, generate values
for all columns evenly spaced with the
given interval value. If a max value is
specified for a given column the data is
randomly generated between min and max
and decimated down to the interval. If
no max is provided, the data is linearly
generated starting at the minimum value
(instead of generating random data). For
non-decimated string-type columns, the
interval value is ignored. Instead, the
values are generated following the
pattern 'attrname_creationIndex#', i.e.,
the column name suffixed with an
underscore and a running counter
(starting at 0). For string types with
limited size (e.g., char4), the prefix is
dropped. No nulls will be generated for
nullable columns.
NULL_PERCENTAGE
: If specified, then
generate the given percentage of the
count as nulls for all nullable columns.
This option will be ignored for
non-nullable columns. The value must be
within the range [0, 1.0]. The default
value is 5% (0.05).
CARDINALITY
: If specified, limit the
randomly generated values to a fixed
set. Not allowed on a column with
interval specified, and is not
applicable to WKT or Track-specific
columns. The value must be greater than
0. This option is disabled by default.
ATTR_NAME
: Use the desired column name in place
of ATTR_NAME
, and set the following parameters for
the column specified. This overrides any
parameter set by ALL
.
MIN
: For numerical columns, the minimum
of the generated values is set to this
value. Default is -99999. For point,
shape, and track columns, min for
numeric 'x' and 'y' columns needs to be
within [-180, 180] and [-90, 90],
respectively. The default minimum
possible values for these columns in
such cases are -180.0 and -90.0. For the
'TIMESTAMP' column, the default minimum
corresponds to Jan 1, 2010. For string
columns, the minimum length of the
randomly generated strings is set to
this value (default is 0). If both
minimum and maximum are provided,
minimum must be less than or equal to
max. If the min is outside the accepted
ranges for string columns and 'x' and
'y' columns for point/shape/track, then
those parameters will not be set;
however, an error will not be thrown in
such a case. It is the responsibility of
the user to use the ALL
parameter judiciously.
MAX
: For numerical columns, the maximum
of the generated values is set to this
value. Default is 99999. For point,
shape, and track columns, max for
numeric 'x' and 'y' columns needs to be
within [-180, 180] and [-90, 90],
respectively. The default maximum
possible values for these columns in
such cases are 180.0 and 90.0. For
string columns, the maximum length of
the randomly generated strings. If both
minimum and maximum are provided, *max*
must be greater than or equal to *min*.
If the *max* is outside the accepted
ranges for string columns and 'x' and
'y' columns for point/shape/track, then
those parameters will not be set;
however, an error will not be thrown in
such a case. It is the responsibility of
the user to use the ALL
parameter judiciously.
INTERVAL
: If specified, generate values
for all columns evenly spaced with the
given interval value. If a max value is
specified for a given column the data is
randomly generated between min and max
and decimated down to the interval. If
no max is provided, the data is linearly
generated starting at the minimum value
(instead of generating random data). For
non-decimated string-type columns, the
interval value is ignored. Instead, the
values are generated following the
pattern 'attrname_creationIndex#', i.e.,
the column name suffixed with an
underscore and a running counter
(starting at 0). For string types with
limited size (e.g., char4), the prefix is
dropped. No nulls will be generated for
nullable columns.
NULL_PERCENTAGE
: If specified and if
this column is nullable, then generate
the given percentage of the count as
nulls. This option will result in an
error if the column is not nullable.
The value must be within the range [0,
1.0]. The default value is 5% (0.05).
CARDINALITY
: If specified, limit the
randomly generated values to a fixed
set. Not allowed on a column with
interval specified, and is not
applicable to WKT or Track-specific
columns. The value must be greater than
0. This option is disabled by default.
TRACK_LENGTH
: This key-map pair is only valid
for track data sets (an error is thrown
otherwise). No nulls would be generated for
nullable columns.
MIN
: Minimum possible length for
generated series; default is 100 records
per series. Must be an integral value
within the range [1, 500]. If both min
and max are specified, min must be less
than or equal to max. The minimum
allowed value is 1. The maximum allowed
value is 500.
MAX
: Maximum possible length for
generated series; default is 500 records
per series. Must be an integral value
within the range [1, 500]. If both min
and max are specified, max must be
greater than or equal to min. The
minimum allowed value is 1. The maximum
allowed value is 500.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public InsertSymbolResponse insertSymbol(InsertSymbolRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public InsertSymbolResponse insertSymbol(String symbolId, String symbolFormat, ByteBuffer symbolData, Map<String,String> options) throws GPUdbException
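A minimal sketch of preparing the arguments for the 'svg_path' format: the path string is wrapped as raw bytes, and the color is passed through the options map. The symbol ID in the comment is hypothetical:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class InsertSymbolExample {
    // Wraps an 'svg_path' string as the raw-byte payload insertSymbol expects.
    public static ByteBuffer encodePath(String svgPath) {
        return ByteBuffer.wrap(svgPath.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        ByteBuffer symbolData =
                encodePath("M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z");

        Map<String, String> options = new HashMap<>();
        options.put("color", "FF0000"); // render the path in red (RRGGBB hex)

        System.out.println(symbolData.remaining() > 0); // prints "true"
        // Hypothetical call; 'my_symbol' matches values in the SYMBOLCODE column:
        // db.insertSymbol("my_symbol", "svg_path", symbolData, options);
    }
}
```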
symbolId
- The id of the symbol being added. This is the same id
that should be in the 'SYMBOLCODE' column for objects
using this symbolsymbolFormat
- Specifies the symbol format. Must be either 'svg'
or 'svg_path'.
Supported values:
symbolData
- The actual symbol data. If symbolFormat
is
'svg' then this should be the raw bytes representing
an svg file. If symbolFormat
is 'svg_path' then
this should be an SVG path string, for example:
'M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z'options
- Optional parameters.
COLOR
: If symbolFormat
is 'svg' this is
ignored. If symbolFormat
is 'svg_path'
then this option specifies the color (in RRGGBB
hex format) of the path. For example, to have
the path rendered in red, use 'FF0000'. If
'color' is not provided then '00FF00' (i.e.
green) is used by default.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public KillProcResponse killProc(KillProcRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public KillProcResponse killProc(String runId, Map<String,String> options) throws GPUdbException
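As a sketch, the options map can narrow the kill to proc instances started with a particular run tag; the tag value below is illustrative, and the commented calls assume a live connection:

```java
import java.util.HashMap;
import java.util.Map;

public class KillProcOptionsExample {
    // Builds options that restrict the kill to procs launched with a run tag
    // (the same tag that was passed to executeProc).
    public static Map<String, String> buildOptions(String runTag) {
        Map<String, String> options = new HashMap<>();
        options.put("run_tag", runTag);
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> options = buildOptions("nightly_etl");
        System.out.println(options.get("run_tag")); // prints "nightly_etl"
        // Hypothetical calls:
        // db.killProc("", options);      // kill all running procs with this tag
        // db.killProc("12345", options); // kill run 12345 only if its tag matches
    }
}
```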
runId
- The run ID of a running proc instance. If a proc with a
matching run ID is not found or the proc instance has
already completed, no procs will be killed. If not
specified, all running proc instances will be killed. The
default value is ''.options
- Optional parameters.
RUN_TAG
: If runId
is specified, kill
the proc instance that has a matching run ID and
a matching run tag that was provided to executeProc
. If runId
is not
specified, kill the proc instance(s) where a
matching run tag was provided to executeProc
. The default value is
''.
CLEAR_EXECUTE_AT_STARTUP
: If TRUE
, kill and remove the instance of the proc
matching the auto-start run ID that was created
to run when the database is started. The
auto-start run ID was returned from executeProc
and can be retrieved
using showProc
.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ListGraphResponse listGraph(ListGraphRequest request) throws GPUdbException
GPUdbException
public ListGraphResponse listGraph(String graphName, Map<String,String> options) throws GPUdbException
GPUdbException
public LockTableResponse lockTable(LockTableRequest request) throws GPUdbException
lockType
of
READ_WRITE
, indicating all operations are permitted. A user may
request a READ_ONLY
or a WRITE_ONLY
lock, after which only read or write operations, respectively, are
permitted on the table until the lock is removed. When lockType
is NO_ACCESS
then
no operations are permitted on the table. The lock status can be
queried by setting lockType
to STATUS
.request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public LockTableResponse lockTable(String tableName, String lockType, Map<String,String> options) throws GPUdbException
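The lock types this endpoint accepts form a small fixed set, sketched below as lowercase strings matching the documented constants; the table name in the comments is illustrative:

```java
import java.util.Arrays;
import java.util.List;

public class LockTypeExample {
    // Lock types accepted by lockTable; "status" only reports the current
    // lock, the others change it.
    static final List<String> LOCK_TYPES =
            Arrays.asList("status", "no_access", "read_only", "write_only", "read_write");

    public static boolean isValidLockType(String lockType) {
        return LOCK_TYPES.contains(lockType);
    }

    public static void main(String[] args) {
        System.out.println(isValidLockType("read_only")); // prints "true"
        // Hypothetical usage, assuming table "ki_home.orders" exists:
        // db.lockTable("ki_home.orders", "read_only", new HashMap<>());  // block writes
        // db.lockTable("ki_home.orders", "status", new HashMap<>());     // query lock state
        // db.lockTable("ki_home.orders", "read_write", new HashMap<>()); // restore access
    }
}
```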
lockType
of READ_WRITE
,
indicating all operations are permitted. A user may request a READ_ONLY
or a
WRITE_ONLY
lock, after which only read or write operations,
respectively, are permitted on the table until the lock is removed.
When lockType
is NO_ACCESS
then
no operations are permitted on the table. The lock status can be
queried by setting lockType
to STATUS
.tableName
- Name of the table to be locked, in
[schema_name.]table_name format, using standard name resolution rules. It must be a
currently existing table or view.lockType
- The type of lock being applied to the table. Setting it
to STATUS
will return the current lock status of the
table without changing it.
Supported values:
STATUS
: Show locked status
NO_ACCESS
: Allow no read/write operations
READ_ONLY
: Allow only read operations
WRITE_ONLY
: Allow only write operations
READ_WRITE
: Allow all read/write operations
The default value is STATUS
.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public MatchGraphResponse matchGraph(MatchGraphRequest request) throws GPUdbException
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public MatchGraphResponse matchGraph(String graphName, List<String> samplePoints, String solveMethod, String solutionTable, Map<String,String> options) throws GPUdbException
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.
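A minimal sketch for the MARKOV_CHAIN solver follows; the graph name, column expressions, and tuning values are illustrative assumptions, and the actual call (commented) requires a live connection:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MatchGraphOptionsExample {
    // Tuning options for the markov_chain solver; values here are illustrative,
    // chosen near the documented defaults.
    public static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("gps_noise", "5.0");       // meters of GPS noise to filter
        options.put("num_segments", "3");      // candidate road segments per point
        options.put("chain_width", "9");       // lookahead window in the Markov kernel
        options.put("search_radius", "0.001"); // snap radius (~100 meters)
        return options;
    }

    public static void main(String[] args) {
        // Sample points are column expressions tagged with identifiers:
        List<String> samplePoints = Arrays.asList(
                "track_table.x AS SAMPLE_X",
                "track_table.y AS SAMPLE_Y",
                "track_table.ts AS SAMPLE_TIME");
        System.out.println(buildOptions().get("chain_width")); // prints "9"
        // Hypothetical call; graph and table names are illustrative:
        // db.matchGraph("road_graph", samplePoints, "markov_chain",
        //         "ki_home.match_result", buildOptions());
    }
}
```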
graphName
- Name of the underlying geospatial graph resource to
match to using samplePoints
.samplePoints
- Sample points used to match to an underlying
geospatial graph. Sample points must be specified
using identifiers; identifiers are
grouped as combinations. Identifiers can be
used with: existing column names, e.g.,
'table.column AS SAMPLE_X'; expressions, e.g.,
'ST_MAKEPOINT(table.x, table.y) AS
SAMPLE_WKTPOINT'; or constant values, e.g., '{1, 2,
10} AS SAMPLE_TRIPID'.solveMethod
- The type of solver to use for graph matching.
Supported values:
MARKOV_CHAIN
: Matches samplePoints
to the graph using the Hidden Markov Model
(HMM)-based method, which conducts a
range-tree closest-edge search to find the
best combinations of possible road segments
(NUM_SEGMENTS
) for each sample point to
create the best route. The route is secured
one point at a time while looking ahead
CHAIN_WIDTH
number of points, so the
prediction is corrected after each point.
This solution type is the most accurate but
also the most computationally intensive.
Related options: NUM_SEGMENTS
and CHAIN_WIDTH
.
MATCH_OD_PAIRS
: Matches samplePoints
to find the most probable path
between origin and destination pairs with
cost constraints.
MATCH_SUPPLY_DEMAND
: Matches samplePoints
to optimize scheduling
multiple supplies (trucks) with varying
sizes to varying demand sites with varying
capacities per depot. Related options:
PARTIAL_LOADING
and MAX_COMBINATIONS
.
MATCH_BATCH_SOLVES
: Matches samplePoints
source and destination pairs
for the shortest path solves in batch mode.
MATCH_LOOPS
: Matches closed loops (Eulerian
paths) originating and ending at each graph
node within min and max hops (levels).
MATCH_CHARGING_STATIONS
: Matches an optimal
path across a number of ev-charging stations
between source and target locations.
MATCH_SIMILARITY
: Matches the intersection
set(s) by computing the Jaccard similarity
score between node pairs.
MATCH_PICKUP_DROPOFF
: Matches the pickups
and dropoffs by optimizing the total trip
costs
MATCH_CLUSTERS
: Matches the graph nodes
with a cluster index using Louvain
clustering algorithm
MATCH_PATTERN
: Matches a pattern in the
graph
MATCH_EMBEDDING
: Creates vector node
embeddings
MATCH_ISOCHRONE
: Solves for isochrones for
a set of input sources
The default value is MARKOV_CHAIN
.solutionTable
- The name of the table used to store the results,
in [schema_name.]table_name format, using standard
name resolution rules and
meeting table naming criteria. This
table contains a track of geospatial points for
the matched portion of the graph, a track ID, and
a score value. Also outputs a details table
containing a trip ID (that matches the track ID),
the latitude/longitude pair, the timestamp the
point was recorded at, and an edge ID
corresponding to the matched road segment. Must
not be an existing table of the same name. The
default value is ''.options
- Additional parameters.
GPS_NOISE
: GPS noise value (in meters) to
remove redundant sample points. Use -1 to
disable noise reduction. The default value
accounts for 95% of point variation (+ or -5
meters). The default value is '5.0'.
NUM_SEGMENTS
: Maximum number of potentially
matching road segments for each sample point.
For the MARKOV_CHAIN
solver, the default is 3. The
default value is '3'.
SEARCH_RADIUS
: Maximum search radius used when
snapping sample points onto potentially matching
surrounding segments. The default value
corresponds to approximately 100 meters. The
default value is '0.001'.
CHAIN_WIDTH
: For the MARKOV_CHAIN
solver only. Length of the sample
points lookahead window within the Markov
kernel; the larger the number, the more accurate
the solution. The default value is '9'.
SOURCE
: Optional WKT starting point from samplePoints
for the solver. The default
behavior for the endpoint is to use time to
determine the starting point. The default value
is 'POINT NULL'.
DESTINATION
: Optional WKT ending point from
samplePoints
for the solver. The default
behavior for the endpoint is to use time to
determine the destination point. The default
value is 'POINT NULL'.
PARTIAL_LOADING
: For the MATCH_SUPPLY_DEMAND
solver only. When false
(non-default), trucks do not off-load at the
demand (store) side if the remainder is less
than the store's need.
Supported values:
TRUE
: Partial off-loading at multiple
store (demand) locations
FALSE
: No partial off-loading allowed
if supply is less than the store's
demand.
The default value is TRUE
.
MAX_COMBINATIONS
: For the MATCH_SUPPLY_DEMAND
solver only. This is the
cutoff for the number of generated combinations
for sequencing the demand locations; this can be
increased up to 2M. The default value is
'10000'.
MAX_SUPPLY_COMBINATIONS
: For the MATCH_SUPPLY_DEMAND
solver only. This is the
cutoff for the number of generated combinations
for sequencing the supply locations if/when
'permute_supplies' is true. The default value is
'10000'.
LEFT_TURN_PENALTY
: This will add an additional
weight over the edges labelled as 'left turn' if
the 'add_turn' option parameter of the createGraph
was invoked at
graph creation. The default value is '0.0'.
RIGHT_TURN_PENALTY
: This will add an additional
weight over the edges labelled as 'right turn'
if the 'add_turn' option parameter of the createGraph
was invoked at
graph creation. The default value is '0.0'.
INTERSECTION_PENALTY
: This will add an
additional weight over the edges labelled as
'intersection' if the 'add_turn' option
parameter of the createGraph
was invoked at
graph creation. The default value is '0.0'.
SHARP_TURN_PENALTY
: This will add an additional
weight over the edges labelled as 'sharp turn'
or 'u-turn' if the 'add_turn' option parameter
of the createGraph
was
invoked at graph creation. The default value is
'0.0'.
AGGREGATED_OUTPUT
: For the MATCH_SUPPLY_DEMAND
solver only. When it is
true (default), each record in the output table
shows a particular truck's scheduled cumulative
round trip path (MULTILINESTRING) and the
corresponding aggregated cost. Otherwise, each
record shows a single scheduled truck route
(LINESTRING) towards a particular demand
location (store id) with its corresponding cost.
The default value is 'true'.
OUTPUT_TRACKS
: For the MATCH_SUPPLY_DEMAND
solver only. When it is
true (non-default), the output will be in tracks
format for all the round trips of each truck in
which the timestamps are populated directly from
the edge weights starting from their originating
depots. The default value is 'false'.
MAX_TRIP_COST
: For the MATCH_SUPPLY_DEMAND
and MATCH_PICKUP_DROPOFF
solvers only. If this
constraint is greater than zero (default) then
the trucks/rides will skip travelling from one
demand/pick location to another if the cost
between them is greater than this number
(distance or time). Zero (default) value means
no check is performed. The default value is
'0.0'.
FILTER_FOLDING_PATHS
: For the MARKOV_CHAIN
solver only. When true
(non-default), the paths per sequence
combination are checked for folding-over patterns,
which can significantly increase the execution
time depending on the chain width and the number
of GPS samples.
Supported values:
The default value is FALSE
.
UNIT_UNLOADING_COST
: For the MATCH_SUPPLY_DEMAND
solver only. The unit cost
per load amount to be delivered. If this value
is greater than zero (default) then the
additional cost of this unit load multiplied by
the total dropped load will be added over to the
trip cost to the demand location. The default
value is '0.0'.
MAX_NUM_THREADS
: For the MARKOV_CHAIN
solver only. If specified (greater
than zero), the maximum number of threads will
not be greater than the specified value. It can
be lower due to the memory and the number of cores
available. Default value of zero allows the
algorithm to set the maximal number of threads
within these constraints. The default value is
'0'.
SERVICE_LIMIT
: For the MATCH_SUPPLY_DEMAND
solver only. If specified
(greater than zero), any supply actor's total
service cost (distance or time) will be limited
by the specified value including multiple rounds
(if set). The default value is '0.0'.
ENABLE_REUSE
: For the MATCH_SUPPLY_DEMAND
solver only. If specified
(true), all supply actors can be scheduled for
second rounds from their originating depots.
Supported values:
TRUE
: Allows reusing supply actors
(trucks, e.g.) for scheduling again.
FALSE
: Supply actors are scheduled only
once from their depots.
The default value is FALSE
.
MAX_STOPS
: For the MATCH_SUPPLY_DEMAND
solver only. If specified
(greater than zero), a supply actor (truck) can
at most have this many stops (demand locations)
in one round trip. Otherwise, it is unlimited.
If 'enable_truck_reuse' is on, this condition
will be applied separately at each round trip
use of the same truck. The default value is '0'.
SERVICE_RADIUS
: For the MATCH_SUPPLY_DEMAND
and MATCH_PICKUP_DROPOFF
solvers only. If specified
(greater than zero), it filters the
demands/picks outside this radius centered
around the supply actor/ride's originating
location (distance or time). The default value
is '0.0'.
PERMUTE_SUPPLIES
: For the MATCH_SUPPLY_DEMAND
solver only. If specified
(true), supply side actors are permuted for the
demand combinations during MSDO optimization.
Note that this option increases optimization
time significantly, so use of the
'max_combinations' option is recommended to
prevent prohibitively long runs.
Supported values:
TRUE
: Generates sequences over supply
side permutations if total supply is
less than twice the total demand
FALSE
: Permutations are not performed,
rather a specific order of supplies
based on capacity is computed
TRUE
.
BATCH_TSM_MODE
: For the MATCH_SUPPLY_DEMAND
solver only. When enabled,
the number of visits to each demand
location by a single salesman on each trip is
limited to one (1); otherwise there is no
bound.
Supported values:
TRUE
: Sets only one visit per demand
location by a salesman (TSM mode)
FALSE
: No preset limit (usual MSDO
mode)
FALSE
.
ROUND_TRIP
: For the MATCH_SUPPLY_DEMAND
solver only. When enabled,
the supply will have to return back to the
origination location.
Supported values:
TRUE
: The optimization is done for
trips in a round-trip manner, always
returning to originating locations
FALSE
: Supplies do not have to come
back to their originating locations in
their routes. The routes are considered
finished at the final dropoff.
TRUE
.
NUM_CYCLES
: For the MATCH_CLUSTERS
solver only. Terminates the
cluster exchange iterations across 2-step-cycles
(outer loop) when quality does not improve
during iterations. The default value is '10'.
NUM_LOOPS_PER_CYCLE
: For the MATCH_CLUSTERS
and MATCH_EMBEDDING
solvers only. Terminates the
cluster exchanges within the first step
iterations of a cycle (inner loop) unless
convergence is reached. The default value is
'10'.
NUM_OUTPUT_CLUSTERS
: For the MATCH_CLUSTERS
solver only. Limits the output
to the top 'num_output_clusters' clusters based
on density. Default value of zero outputs all
clusters. The default value is '0'.
MAX_NUM_CLUSTERS
: For the MATCH_CLUSTERS
and MATCH_EMBEDDING
solvers only. If set (value
greater than zero), it terminates when the
number of clusters goes below this number.
For the embedding solver, the default is 8. The
default value is '0'.
CLUSTER_QUALITY_METRIC
: For the MATCH_CLUSTERS
solver only. The quality metric
for Louvain modularity optimization solver.
Supported values:
GIRVAN
: Uses the Newman-Girvan quality
metric for the cluster solver
SPECTRAL
: Applies recursive spectral
bisection (RSB) partitioning solver
GIRVAN
.
RESTRICTED_TYPE
: For the MATCH_SUPPLY_DEMAND
solver only. Optimization
is performed by restricting routes labeled by
'MSDO_ODDEVEN_RESTRICTED' only for this supply
actor (truck) type.
Supported values:
ODD
: Applies odd/even rule restrictions
to odd tagged vehicles.
EVEN
: Applies odd/even rule
restrictions to even tagged vehicles.
NONE
: Does not apply odd/even rule
restrictions to any vehicles.
NONE
.
SERVER_ID
: Indicates which graph server(s) to
send the request to. Default is to send to the
server, amongst those containing the
corresponding graph, that has the most
computational bandwidth. The default value is
''.
INVERSE_SOLVE
: For the MATCH_BATCH_SOLVES
solver only. Solves
source-destination pairs using the inverse
shortest path solver.
Supported values:
The default value is FALSE
.
MIN_LOOP_LEVEL
: For the MATCH_LOOPS
solver only. Finds closed loops
around each node that are at least this many
hops (levels) deep. The default value is
'0'.
MAX_LOOP_LEVEL
: For the MATCH_LOOPS
solver only. Finds closed loops
around each node that are at most this many
hops (levels) deep. The default value is
'5'.
SEARCH_LIMIT
: For the MATCH_LOOPS
solver only. Searches within this
limit of nodes per vertex to detect loops. The
value zero means there is no limit. The default
value is '10000'.
OUTPUT_BATCH_SIZE
: For the MATCH_LOOPS
solver only. Uses this value as the
batch size for the number of loops when
flushing (inserting) to the output table. The
default value is '1000'.
CHARGING_CAPACITY
: For the MATCH_CHARGING_STATIONS
solver only. This is
the maximum EV-charging capacity of a vehicle
(distance in meters or time in seconds depending
on the unit of the graph weights). The default
value is '300000.0'.
CHARGING_CANDIDATES
: For the MATCH_CHARGING_STATIONS
solver only. The solver
searches for this many stations
closest to each base charging location found
by capacity. The default value is '10'.
CHARGING_PENALTY
: For the MATCH_CHARGING_STATIONS
solver only. This is
the penalty for full charging. The default value
is '30000.0'.
MAX_HOPS
: For the MATCH_SIMILARITY
and MATCH_EMBEDDING
solvers only. Searches within
this maximum number of hops for source and
target node pairs to compute the Jaccard
scores. The default
value is '3'.
TRAVERSAL_NODE_LIMIT
: For the MATCH_SIMILARITY
solver only. Limits the
traversal depth once it reaches this many
nodes. The default value is '1000'.
PAIRED_SIMILARITY
: For the MATCH_SIMILARITY
solver only. If true, it
computes the Jaccard score between each pair;
otherwise it computes the Jaccard score from the
intersection set between the source and target
nodes.
Supported values:
The default value is TRUE
.
FORCE_UNDIRECTED
: For the MATCH_PATTERN
and MATCH_EMBEDDING
solvers only. If set to true,
pattern matching treats both the pattern and
the graph as undirected.
Supported values:
The default value is FALSE
.
MAX_VECTOR_DIMENSION
: For the MATCH_EMBEDDING
solver only. Limits the number
of dimensions in node vector embeddings. The
default value is '1000'.
OPTIMIZE_EMBEDDING_WEIGHTS
: For the MATCH_EMBEDDING
solver only. Solves for
the optimal weights per sub-feature in vector
embeddings.
Supported values:
The default value is FALSE
.
EMBEDDING_WEIGHTS
: For the MATCH_EMBEDDING
solver only. User-specified
weights per sub-feature in vector embeddings.
The string contains the comma separated float
values for each sub-feature in the vector space.
These values will ONLY be used if
'optimize_embedding_weights' is false. The
default value is '1.0,1.0,1.0,1.0'.
OPTIMIZATION_SAMPLING_SIZE
: For the MATCH_EMBEDDING
solver only. Sets the number of
random nodes from the graph for solving the
weights using stochastic gradient descent. The
default value is '1000'.
OPTIMIZATION_MAX_ITERATIONS
: For the MATCH_EMBEDDING
solver only. When the number of
iterations (epochs) of the stochastic gradient
descent algorithm reaches this value, the solve
stops, unless the relative error between
consecutive iterations is below the
'optimization_error_tolerance' option. The
default value is '1000'.
OPTIMIZATION_ERROR_TOLERANCE
: For the MATCH_EMBEDDING
solver only. When the relative
error between all of the weights' consecutive
iterations falls below this threshold the
optimization cycle is interrupted unless the
number of iterations reaches the limit set by
the 'optimization_max_iterations' option. The
default value is '0.001'.
OPTIMIZATION_ITERATION_RATE
: For the MATCH_EMBEDDING
solver only. It is otherwise
known as the learning rate, which is the
proportionality constant in front of the
gradient term in successive iterations. The
default value is '0.3'.
MAX_RADIUS
: For the MATCH_ISOCHRONE
solver only. Sets the maximal
reachability limit for computing isochrones.
Zero means no limit. The default value is '0.0'.
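As a sketch of how these string-valued options are passed, the following builds an options map for a MATCH_SUPPLY_DEMAND solve. The graph and table names in the commented call are hypothetical, and the option keys are the lower-case forms of the constants documented above:

```java
import java.util.HashMap;
import java.util.Map;

public class MsdoOptionsExample {
    // Builds an options map for a MATCH_SUPPLY_DEMAND match/graph solve.
    // Keys are the lower-case string forms of the constants documented above.
    public static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("unit_unloading_cost", "2.5"); // extra cost per unit load dropped
        options.put("service_radius", "15000.0");  // skip demands beyond this radius of the origin
        options.put("max_stops", "8");             // at most 8 demand stops per round trip
        options.put("enable_reuse", "true");       // trucks may be scheduled for a second round
        options.put("round_trip", "true");         // routes must return to the originating depot
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> options = buildOptions();
        // Hypothetical invocation -- requires a running server and an existing graph:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // gpudb.matchGraph("delivery_graph", samplePoints, "match_supply_demand",
        //                  "msdo_solution_table", options);
        System.out.println(options.size() + " options set");
    }
}
```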
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public MergeRecordsResponse mergeRecords(MergeRecordsRequest request) throws GPUdbException
Create a new empty result table (specified by tableName
), and
insert all records from source tables (specified by sourceTableNames
) based on the field mapping information (specified by
fieldMaps
).
For merge records details and examples, see Merge Records. For limitations, see Merge Records Limitations and Cautions.
The field map (specified by fieldMaps
) holds
the user-specified maps of target table column names to source table
columns. The array of fieldMaps
must
match one-to-one with the sourceTableNames
, e.g., there's a map present in fieldMaps
for
each table listed in sourceTableNames
.
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public MergeRecordsResponse mergeRecords(String tableName, List<String> sourceTableNames, List<Map<String,String>> fieldMaps, Map<String,String> options) throws GPUdbException
Create a new empty result table (specified by tableName
), and
insert all records from source tables (specified by sourceTableNames
) based on the field mapping information (specified by
fieldMaps
).
For merge records details and examples, see Merge Records. For limitations, see Merge Records Limitations and Cautions.
The field map (specified by fieldMaps
) holds the user-specified
maps of target table column names to source table columns. The array of
fieldMaps
must match one-to-one with the sourceTableNames
, e.g., there's a map present in fieldMaps
for
each table listed in sourceTableNames
.
tableName
- The name of the new result table for the records to be
merged into, in [schema_name.]table_name format, using
standard name resolution rules and meeting table naming criteria. Must NOT be
an existing table.sourceTableNames
- The list of names of source tables to get the
records from, each in [schema_name.]table_name
format, using standard name resolution rules. Must
be existing table names.fieldMaps
- Contains a list of source/target column mappings, one
mapping for each source table listed in sourceTableNames
being merged into the target table
specified by tableName
. Each mapping contains
the target column names (as keys) that the data in the
mapped source columns or column expressions (as values) will be
merged into. All of the source columns being merged
into a given target column must match in type, as that
type will determine the type of the new target column.options
- Optional parameters.
CREATE_TEMP_TABLE
: If TRUE
, a unique temporary table name will be
generated in the sys_temp schema and used in
place of tableName
. If PERSIST
is FALSE
, then this is always allowed even if the
caller does not have permission to create
tables. The generated name is returned in QUALIFIED_TABLE_NAME
.
Supported values:
The default value is FALSE
.
COLLECTION_NAME
: [DEPRECATED--please specify
the containing schema for the merged table as
part of tableName
and use createSchema
to
create the schema if non-existent] Name of a
schema for the newly created merged table
specified by tableName
.
IS_REPLICATED
: Indicates the distribution scheme for the
data of the merged table specified in tableName
. If true, the table will be replicated. If false, the
table will be randomly sharded.
Supported values:
The default value is FALSE
.
TTL
: Sets the TTL of the merged table
specified in tableName
.
PERSIST
: If TRUE
, then the table specified in tableName
will be persisted and will not expire
unless a TTL
is specified. If FALSE
, then the table will be an in-memory
table and will expire unless a TTL
is specified.
Supported values:
The default value is TRUE
.
CHUNK_SIZE
: Indicates the number of records per
chunk to be used for the merged table specified
in tableName
.
CHUNK_COLUMN_MAX_MEMORY
: Indicates the target
maximum data size for each column in a chunk to
be used for the merged table specified in tableName
.
CHUNK_MAX_MEMORY
: Indicates the target maximum
data size for all columns in a chunk to be used
for the merged table specified in tableName
.
VIEW_ID
: ID of the view this result table is part of. The
default value is ''.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ModifyGraphResponse modifyGraph(ModifyGraphRequest request) throws GPUdbException
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, and Graph REST Tutorial before using this endpoint.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ModifyGraphResponse modifyGraph(String graphName, List<String> nodes, List<String> edges, List<String> weights, List<String> restrictions, Map<String,String> options) throws GPUdbException
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, and Graph REST Tutorial before using this endpoint.
graphName
- Name of the graph resource to modify.nodes
- Nodes with which to update existing nodes
in graph
specified by graphName
. Review Nodes for more information. Nodes must
be specified using identifiers; identifiers are grouped as
combinations. Identifiers can be used
with existing column names, e.g., 'table.column AS
NODE_ID', expressions, e.g., 'ST_MAKEPOINT(column1,
column2) AS NODE_WKTPOINT', or raw values, e.g., '{9, 10,
11} AS NODE_ID'. If using raw values in an identifier
combination, the number of values specified must match
across the combination. Identifier combination(s) do not
have to match the method used to create the graph, e.g.,
if column names were specified to create the graph,
expressions or raw values could also be used to modify the
graph.edges
- Edges with which to update existing edges
in graph
specified by graphName
. Review Edges for more information. Edges must
be specified using identifiers; identifiers are grouped as
combinations. Identifiers can be used
with existing column names, e.g., 'table.column AS
EDGE_ID', expressions, e.g., 'SUBSTR(column, 1, 6) AS
EDGE_NODE1_NAME', or raw values, e.g., "{'family',
'coworker'} AS EDGE_LABEL". If using raw values in an
identifier combination, the number of values specified
must match across the combination. Identifier
combination(s) do not have to match the method used to
create the graph, e.g., if column names were specified to
create the graph, expressions or raw values could also be
used to modify the graph.weights
- Weights with which to update existing weights
in
graph specified by graphName
. Review Weights for more information. Weights
must be specified using identifiers; identifiers are grouped
as combinations. Identifiers can be used
with existing column names, e.g., 'table.column AS
WEIGHTS_EDGE_ID', expressions, e.g., 'ST_LENGTH(wkt) AS
WEIGHTS_VALUESPECIFIED', or raw values, e.g., '{4, 15}
AS WEIGHTS_VALUESPECIFIED'. If using raw values in an
identifier combination, the number of values specified
must match across the combination. Identifier
combination(s) do not have to match the method used to
create the graph, e.g., if column names were specified
to create the graph, expressions or raw values could
also be used to modify the graph.restrictions
- Restrictions with which to update existing restrictions
in graph specified by graphName
. Review Restrictions for more
information. Restrictions must be specified using
identifiers; identifiers are
grouped as combinations. Identifiers can be
used with existing column names, e.g.,
'table.column AS RESTRICTIONS_EDGE_ID',
expressions, e.g., 'column/2 AS
RESTRICTIONS_VALUECOMPARED', or raw values, e.g.,
'{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'. If
using raw values in an identifier combination, the
number of values specified must match across the
combination. Identifier combination(s) do not have
to match the method used to create the graph, e.g.,
if column names were specified to create the graph,
expressions or raw values could also be used to
modify the graph.options
- Optional parameters.
RESTRICTION_THRESHOLD_VALUE
: Value-based
restriction comparison. Any node or edge with a
RESTRICTIONS_VALUECOMPARED value greater than
the RESTRICTION_THRESHOLD_VALUE
will not be
included in the graph.
EXPORT_CREATE_RESULTS
: If set to TRUE
, returns the graph topology in the
response as arrays.
Supported values:
The default value is FALSE
.
ENABLE_GRAPH_DRAW
: If set to TRUE
, adds an 'EDGE_WKTLINE' column identifier
to the specified GRAPH_TABLE
so the graph can be viewed via WMS;
for social and non-geospatial graphs, the
'EDGE_WKTLINE' column identifier will be
populated with spatial coordinates derived from
a flattening layout algorithm so the graph can
still be viewed.
Supported values:
The default value is FALSE
.
SAVE_PERSIST
: If set to TRUE
, the graph will be saved in the persist
directory (see the config reference for more
information). If set to FALSE
, the graph will be removed when the graph
server is shutdown.
Supported values:
The default value is FALSE
.
ADD_TABLE_MONITOR
: Adds a table monitor to
every table used in the creation of the graph;
this table monitor will trigger the graph to
update dynamically upon inserts to the source
table(s). Note that upon database restart, if
SAVE_PERSIST
is also set to TRUE
, the graph will be fully reconstructed and
the table monitors will be reattached. For more
details on table monitors, see createTableMonitor
.
Supported values:
The default value is FALSE
.
GRAPH_TABLE
: If specified, the created graph is
also created as a table with the given name, in
[schema_name.]table_name format, using standard
name resolution rules and
meeting table naming criteria. This
table will have the following identifier
columns: 'EDGE_ID', 'EDGE_NODE1_ID',
'EDGE_NODE2_ID'. If left blank, no table is
created. The default value is ''.
REMOVE_LABEL_ONLY
: When restrictions on labeled
entities are requested, if set to true, only
the label associated with the entity will be
deleted, NOT the entity itself. Otherwise
(default), both the label AND the entity will
be deleted.
Supported values:
The default value is FALSE
.
ADD_TURNS
: Adds dummy 'pillowed' edges around
intersection nodes where there are more than
three edges so that additional weight penalties
can be imposed by the solve endpoints (this
increases the total number of edges).
Supported values:
The default value is FALSE
.
TURN_ANGLE
: Value in degrees modifies the
thresholds for attributing right, left, sharp
turns, and intersections. It is the vertical
deviation angle from the incoming edge to the
intersection node. The larger the value, the
larger the threshold for sharp turns and
intersections; the smaller the value, the larger
the threshold for right and left turns; 0 <
turn_angle < 90. The default value is '60'.
USE_RTREE
: Use a range tree structure to
accelerate and improve the accuracy of snapping,
especially to edges.
Supported values:
The default value is TRUE
.
LABEL_DELIMITER
: If provided, the label string
will be split according to this delimiter and
each sub-string will be applied as a separate
label onto the specified edge. The default value
is ''.
ALLOW_MULTIPLE_EDGES
: Multigraph choice; if set to
true, multiple edges with the same node pairs
are allowed; otherwise, new edges with the same
existing node pairs will not be inserted.
Supported values:
The default value is TRUE
.
EMBEDDING_TABLE
: If the table exists (it should have
been generated by the match/graph match_embedding
solver), the vector embeddings for the newly
inserted nodes will be appended to this table.
The default value is ''.
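A sketch of the identifier-combination style used for the nodes, edges, and weights parameters above; the table, column, and graph names are hypothetical:

```java
import java.util.List;
import java.util.Map;

public class ModifyGraphSketch {
    // Edges are specified as identifier combinations over existing columns.
    public static List<String> edgeIdentifiers() {
        return List.of(
                "road.edge_id AS EDGE_ID",
                "road.src_node AS EDGE_NODE1_ID",
                "road.dst_node AS EDGE_NODE2_ID");
    }

    // Weights reference the same edges and may use expressions as values.
    public static List<String> weightIdentifiers() {
        return List.of(
                "road.edge_id AS WEIGHTS_EDGE_ID",
                "ST_LENGTH(road.wkt) AS WEIGHTS_VALUESPECIFIED");
    }

    public static void main(String[] args) {
        Map<String, String> options = Map.of(
                "save_persist", "true",        // keep the graph across server restarts
                "add_table_monitor", "true");  // update the graph on source-table inserts
        // Hypothetical call -- requires a running server and an existing graph:
        // gpudb.modifyGraph("road_graph", List.of("road.node_id AS NODE_ID"),
        //         edgeIdentifiers(), weightIdentifiers(), List.of(), options);
        System.out.println(options);
    }
}
```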
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public QueryGraphResponse queryGraph(QueryGraphRequest request) throws GPUdbException
createGraph
and returns a list of
adjacent edge(s) or node(s), also known as an adjacency list, depending
on what's been provided to the endpoint; providing edges will return
nodes and providing nodes will return edges.
To determine the node(s) or edge(s) adjacent to a value from a given
column, provide a list of values to queries
. This field
can be populated with column values from any table as long as the type
is supported by the given identifier. See Query Identifiers for more information.
To return the adjacency list in the response, leave adjacencyTable
empty.
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public QueryGraphResponse queryGraph(String graphName, List<String> queries, List<String> restrictions, String adjacencyTable, int rings, Map<String,String> options) throws GPUdbException
createGraph
and returns a list of adjacent edge(s) or node(s), also
known as an adjacency list, depending on what's been provided to the
endpoint; providing edges will return nodes and providing nodes will
return edges.
To determine the node(s) or edge(s) adjacent to a value from a given
column, provide a list of values to queries
. This field can be
populated with column values from any table as long as the type is
supported by the given identifier. See Query Identifiers for more information.
To return the adjacency list in the response, leave adjacencyTable
empty.
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.
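As a sketch of a two-ring adjacency query using raw-value identifiers; the graph name and node IDs are hypothetical:

```java
import java.util.List;
import java.util.Map;

public class QueryGraphSketch {
    // Raw node IDs grouped under a single query identifier.
    public static List<String> queries() {
        return List.of("{123, 456} AS QUERY_NODE_ID");
    }

    public static void main(String[] args) {
        List<String> restrictions = List.of();  // no extra restrictions
        String adjacencyTable = "";             // empty: return adjacencies in the response
        int rings = 2;                          // direct edges plus one more hop out
        Map<String, String> options = Map.of("limit", "100");
        // Hypothetical call -- requires a running server and an existing graph:
        // QueryGraphResponse resp = gpudb.queryGraph("road_graph", queries(),
        //         restrictions, adjacencyTable, rings, options);
        System.out.println(queries().get(0));
    }
}
```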
graphName
- Name of the graph resource to query.queries
- Nodes or edges to be queried specified using query identifiers. Identifiers can be
used with existing column names, e.g., 'table.column AS
QUERY_NODE_ID', raw values, e.g., '{0, 2} AS
QUERY_NODE_ID', or expressions, e.g.,
'ST_MAKEPOINT(table.x, table.y) AS QUERY_NODE_WKTPOINT'.
Multiple values can be provided as long as the same
identifier is used for all values. If using raw values
in an identifier combination, the number of values
specified must match across the combination.restrictions
- Additional restrictions to apply to the nodes/edges
of an existing graph. Restrictions must be
specified using identifiers; identifiers are
grouped as combinations. Identifiers can be
used with existing column names, e.g.,
'table.column AS RESTRICTIONS_EDGE_ID',
expressions, e.g., 'column/2 AS
RESTRICTIONS_VALUECOMPARED', or raw values, e.g.,
'{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'. If
using raw values in an identifier combination, the
number of values specified must match across the
combination. The default value is an empty List
.adjacencyTable
- Name of the table to store the resulting
adjacencies, in [schema_name.]table_name format,
using standard name resolution rules and
meeting table naming criteria. If left
blank, the query results are instead returned in
the response. If the 'QUERY_TARGET_NODE_LABEL' query identifier is used in
queries
, then two additional columns will
be available: 'PATH_ID' and 'RING_ID'. See Using Labels for more
information. The default value is ''.rings
- Sets the number of rings around the node to query for
adjacency, with '1' being the edges directly attached to
the queried node. Also known as number of hops. For
example, if it is set to '2', the edge(s) directly
attached to the queried node(s) will be returned; in
addition, the edge(s) attached to the node(s) attached to
the initial ring of edge(s) surrounding the queried
node(s) will be returned. If the value is set to '0', any
nodes that meet the criteria in queries
and restrictions
will be returned. This parameter is only
applicable when querying nodes. The default value is 1.options
- Additional parameters.
FORCE_UNDIRECTED
: If set to TRUE
, all inbound edges and outbound edges
relative to the node will be returned. If set to
FALSE
, only outbound edges relative to the node
will be returned. This parameter is only
applicable if the queried graph graphName
is directed and when querying nodes.
Consult Directed Graphs for more
details.
Supported values:
The default value is FALSE
.
LIMIT
: When specified (>0), limits the
number of query results. The size of the nodes
table will be limited by the LIMIT
value. The default value is '0'.
OUTPUT_WKT_PATH
: If true, concatenated WKT
line segments will be added as the WKT column of
the adjacency table.
Supported values:
The default value is FALSE
.
AND_LABELS
: If set to TRUE
, the result of the query has entities that
satisfy all of the target labels, instead of
any.
Supported values:
The default value is FALSE
.
SERVER_ID
: Indicates which graph server(s) to
send the request to. Default is to send to the
server, amongst those containing the
corresponding graph, that has the most
computational bandwidth.
OUTPUT_CHARN_LENGTH
: When specified (>0 and
<=256), limits the char length of string-based
node columns in the output tables. The default
value is '64'.
FIND_COMMON_LABELS
: If set to true, for
many-to-many queries or multi-level traversals,
it lists the common labels between the source
and target nodes and edge labels in each path.
Otherwise (zero rings), it will list all labels of
the node(s) queried.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public RepartitionGraphResponse repartitionGraph(RepartitionGraphRequest request) throws GPUdbException
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public RepartitionGraphResponse repartitionGraph(String graphName, Map<String,String> options) throws GPUdbException
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.
graphName
- Name of the graph resource to rebalance.options
- Optional parameters.
NEW_GRAPH_NAME
: If a non-empty value is
specified, the original graph will be kept
(non-default behavior) and a new balanced graph
will be created under this given name. When the
value is empty (default), the generated
'balanced' graph will replace the original
'unbalanced' graph under the same graph name.
The default value is ''.
SOURCE_NODE
: The distributed shortest path
solve is run from this source node to all the
nodes in the graph to create balanced partitions
using the iso-distance levels of the solution.
The source node is selected by the rebalance
algorithm automatically (default case when the
value is an empty string). Otherwise, the user
specified node is used as the source. The
default value is ''.
SQL_REQUEST_AVRO_JSON
: The default value is ''.
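A sketch of rebalancing a graph while keeping the original; the graph names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class RepartitionGraphSketch {
    public static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        // Non-empty: keep the original graph and create the balanced copy under this name.
        options.put("new_graph_name", "road_graph_balanced");
        // Empty: let the rebalance algorithm pick the source node automatically.
        options.put("source_node", "");
        return options;
    }

    public static void main(String[] args) {
        // Hypothetical call -- requires a running server and an existing graph:
        // gpudb.repartitionGraph("road_graph", buildOptions());
        System.out.println(buildOptions().get("new_graph_name"));
    }
}
```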
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ReserveResourceResponse reserveResource(ReserveResourceRequest request) throws GPUdbException
GPUdbException
public ReserveResourceResponse reserveResource(String component, String name, String action, long bytesRequested, long ownerId, Map<String,String> options) throws GPUdbException
GPUdbException
public RevokePermissionResponse revokePermission(RevokePermissionRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionResponse revokePermission(String principal, String object, String objectType, String permission, Map<String,String> options) throws GPUdbException
principal
- Name of the user or role for which the permission is
being revoked. Must be an existing user or role. The
default value is ''.object
- Name of the object from which the permission is being revoked. It is
recommended to use a fully-qualified name when possible.objectType
- The type of the object on which the permission is being revoked.
Supported values:
CONTEXT
: Context
CREDENTIAL
: Credential
DATASINK
: Data Sink
DATASOURCE
: Data Source
DIRECTORY
: KiFS file directory
GRAPH
: A Graph object
PROC
: UDF Procedure
SCHEMA
: Schema
SQL_PROC
: SQL Procedure
SYSTEM
: System-level access
TABLE
: Database Table
TABLE_MONITOR
: Table monitor
permission
- Permission being revoked.
Supported values:
ADMIN
: Full read/write and administrative
access on the object.
CONNECT
: Connect access on the given data
source or data sink.
CREATE
: Ability to create new objects of
this type.
DELETE
: Delete rows from tables.
EXECUTE
: Ability to Execute the Procedure
object.
INSERT
: Insert access to tables.
READ
: Ability to read, list and use the
object.
UPDATE
: Update access to the table.
USER_ADMIN
: Access to administer users and
roles that do not have system_admin
permission.
WRITE
: Access to write, change and delete
objects.
options
- Optional parameters.
COLUMNS
: Revoke table security from these
columns, comma-separated. The default value is
''.
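A sketch of revoking column-level table access; the principal, table, and column names are hypothetical, and the string values correspond to the lower-case forms of the constants above:

```java
import java.util.Map;

public class RevokePermissionSketch {
    public static Map<String, String> buildArgs() {
        // Lower-case string forms of the documented constants.
        return Map.of(
                "principal", "analyst_role",
                "object", "sales.orders",  // fully-qualified name recommended
                "objectType", "table",
                "permission", "insert");
    }

    public static void main(String[] args) {
        Map<String, String> a = buildArgs();
        // Restrict the revocation to specific columns via the COLUMNS option.
        Map<String, String> options = Map.of("columns", "cc_number,ssn");
        // Hypothetical call -- requires a running server:
        // gpudb.revokePermission(a.get("principal"), a.get("object"),
        //         a.get("objectType"), a.get("permission"), options);
        System.out.println(a.get("permission"));
    }
}
```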
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionCredentialResponse revokePermissionCredential(RevokePermissionCredentialRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionCredentialResponse revokePermissionCredential(String name, String permission, String credentialName, Map<String,String> options) throws GPUdbException
name
- Name of the user or role from which the permission will be
revoked. Must be an existing user or role.permission
- Permission to revoke from the user or role.
Supported values:
CREDENTIAL_ADMIN
: Full read/write and
administrative access on the credential.
CREDENTIAL_READ
: Ability to read and use the
credential.
credentialName
- Name of the credential on which the permission
will be revoked. Must be an existing credential,
or an empty string to revoke access on all
credentials.options
- Optional parameters. The default value is an empty
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionDatasourceResponse revokePermissionDatasource(RevokePermissionDatasourceRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionDatasourceResponse revokePermissionDatasource(String name, String permission, String datasourceName, Map<String,String> options) throws GPUdbException
name
- Name of the user or role from which the permission will be
revoked. Must be an existing user or role.permission
- Permission to revoke from the user or role.
Supported values:
datasourceName
- Name of the data source on which the permission
will be revoked. Must be an existing data source,
or an empty string to revoke permission from all
data sources.options
- Optional parameters. The default value is an empty
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionDirectoryResponse revokePermissionDirectory(RevokePermissionDirectoryRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionDirectoryResponse revokePermissionDirectory(String name, String permission, String directoryName, Map<String,String> options) throws GPUdbException
name
- Name of the user or role from which the permission will be
revoked. Must be an existing user or role.permission
- Permission to revoke from the user or role.
Supported values:
DIRECTORY_READ
: For files in the directory,
access to list files, download files, or use
files in server-side functions
DIRECTORY_WRITE
: Access to upload files to,
or delete files from, the directory. A user
or role with write access automatically has
read access
directoryName
- Name of the KiFS directory to which the permission
revokes accessoptions
- Optional parameters. The default value is an empty
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionProcResponse revokePermissionProc(RevokePermissionProcRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionProcResponse revokePermissionProc(String name, String permission, String procName, Map<String,String> options) throws GPUdbException
name
- Name of the user or role from which the permission will be
revoked. Must be an existing user or role.permission
- Permission to revoke from the user or role.
Supported values:
PROC_ADMIN
: Admin access to the proc.
PROC_EXECUTE
: Execute access to the proc.
procName
- Name of the proc to which the permission grants access.
Must be an existing proc, or an empty string if the
permission grants access to all procs.options
- Optional parameters. The default value is an empty
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionSystemResponse revokePermissionSystem(RevokePermissionSystemRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionSystemResponse revokePermissionSystem(String name, String permission, Map<String,String> options) throws GPUdbException
name
- Name of the user or role from which the permission will be
revoked. Must be an existing user or role.permission
- Permission to revoke from the user or role.
Supported values:
SYSTEM_ADMIN
: Full access to all data and
system functions.
SYSTEM_USER_ADMIN
: Access to administer
users and roles that do not have system_admin
permission.
SYSTEM_WRITE
: Read and write access to all
tables.
SYSTEM_READ
: Read-only access to all tables.
options
- Optional parameters. The default value is an empty
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionTableResponse revokePermissionTable(RevokePermissionTableRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokePermissionTableResponse revokePermissionTable(String name, String permission, String tableName, Map<String,String> options) throws GPUdbException
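As a concrete shape for these arguments, revoking read access on one table for a role reduces to the three names plus an options map; the COLUMNS option narrows the revocation to specific columns. A minimal sketch (the URL, role, table, and column names are illustrative, and the live call is commented out because it needs a running server and the Java API on the classpath):

```java
import java.util.HashMap;
import java.util.Map;

public class RevokeTableReadSketch {
    public static void main(String[] args) throws Exception {
        // Limit the revocation to two columns; an empty map would
        // revoke the permission on the entire table.
        Map<String, String> options = new HashMap<>();
        options.put("columns", "ssn,salary");

        // Hypothetical call against a live server:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // RevokePermissionTableResponse resp = gpudb.revokePermissionTable(
        //         "analyst_role", "table_read", "hr.employees", options);

        System.out.println("columns=" + options.get("columns"));
    }
}
```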
name
- Name of the user or role from which the permission will be
revoked. Must be an existing user or role.permission
- Permission to revoke from the user or role.
Supported values:
TABLE_ADMIN
: Full read/write and
administrative access to the table.
TABLE_INSERT
: Insert access to the table.
TABLE_UPDATE
: Update access to the table.
TABLE_DELETE
: Delete access to the table.
TABLE_READ
: Read access to the table.
tableName
- Name of the table to which the permission grants
access, in [schema_name.]table_name format, using
standard name resolution rules. Must be an
existing table, view or schema.options
- Optional parameters.
COLUMNS
: Apply security to these columns,
comma-separated. The default value is ''.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public RevokeRoleResponse revokeRole(RevokeRoleRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public RevokeRoleResponse revokeRole(String role, String member, Map<String,String> options) throws GPUdbException
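The three-argument form is the common case; both names must already exist on the server. A minimal sketch (the role name, user name, and URL are illustrative; the live call is commented out since it requires a running server):

```java
import java.util.Collections;
import java.util.Map;

public class RevokeRoleSketch {
    public static void main(String[] args) throws Exception {
        String role = "analyst_role";  // role whose membership is revoked
        String member = "jdoe";        // user (or role) losing membership
        Map<String, String> options = Collections.emptyMap(); // no options needed

        // Hypothetical call against a live server:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // RevokeRoleResponse resp = gpudb.revokeRole(role, member, options);

        System.out.println(member + " removed from " + role);
    }
}
```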
role
- Name of the role in which membership will be revoked. Must
be an existing role.member
- Name of the user or role that will be revoked membership
in role
. Must be an existing user or role.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowContainerRegistryResponse showContainerRegistry(ShowContainerRegistryRequest request) throws GPUdbException
GPUdbException
public ShowContainerRegistryResponse showContainerRegistry(String registryName, Map<String,String> options) throws GPUdbException
GPUdbException
public ShowCredentialResponse showCredential(ShowCredentialRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowCredentialResponse showCredential(String credentialName, Map<String,String> options) throws GPUdbException
credentialName
- Name of the credential on which to retrieve
information. The name must refer to a currently
existing credential. If '*' is specified,
information about all credentials will be
returned.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowDatasinkResponse showDatasink(ShowDatasinkRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowDatasinkResponse showDatasink(String name, Map<String,String> options) throws GPUdbException
name
- Name of the data sink for which to retrieve information.
The name must refer to a currently existing data sink. If
'*' is specified, information about all data sinks will be
returned.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowDatasourceResponse showDatasource(ShowDatasourceRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowDatasourceResponse showDatasource(String name, Map<String,String> options) throws GPUdbException
name
- Name of the data source for which to retrieve information.
The name must refer to a currently existing data source. If
'*' is specified, information about all data sources will
be returned.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowDirectoriesResponse showDirectories(ShowDirectoriesRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowDirectoriesResponse showDirectories(String directoryName, Map<String,String> options) throws GPUdbException
directoryName
- The KiFS directory name to show. If empty, shows
all directories. The default value is ''.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowEnvironmentResponse showEnvironment(ShowEnvironmentRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowEnvironmentResponse showEnvironment(String environmentName, Map<String,String> options) throws GPUdbException
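A minimal sketch of the NO_ERROR_IF_NOT_EXISTS option described below (the environment name and URL are illustrative; the live call is commented out because it needs a running server):

```java
import java.util.HashMap;
import java.util.Map;

public class ShowEnvironmentSketch {
    public static void main(String[] args) throws Exception {
        // Ask about one environment, but suppress the error if it is missing.
        Map<String, String> options = new HashMap<>();
        options.put("no_error_if_not_exists", "true");

        // Hypothetical call against a live server:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowEnvironmentResponse resp =
        //         gpudb.showEnvironment("my_ml_env", options);
        // Passing "" or "*" instead would list all environments.

        System.out.println(options);
    }
}
```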
environmentName
- Name of the environment on which to retrieve
information. The name must refer to a currently
existing environment. If '*' or an empty value
is specified, information about all environments
will be returned. The default value is ''.options
- Optional parameters.
NO_ERROR_IF_NOT_EXISTS
: If TRUE
and if the environment specified in environmentName
does not exist, no error is
returned. If FALSE
and if the environment specified in
environmentName
does not exist, then an
error is returned.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowFilesResponse showFiles(ShowFilesRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowFilesResponse showFiles(List<String> paths, Map<String,String> options) throws GPUdbException
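The paths list described below mixes directory names and wildcard patterns. A minimal sketch (the directory and file names are illustrative; the live call is commented out since it needs a running server):

```java
import java.util.Arrays;
import java.util.List;

public class ShowFilesSketch {
    public static void main(String[] args) throws Exception {
        // A directory name lists its files; wildcards are allowed after
        // the directory delimiter: '*' for any run of characters, '?'
        // for exactly one character.
        List<String> paths = Arrays.asList(
                "my-dir",              // everything in this KiFS directory
                "my-dir/report-?.csv"  // report-1.csv, report-2.csv, ...
        );

        // Hypothetical call against a live server:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowFilesResponse resp =
        //         gpudb.showFiles(paths, Collections.emptyMap());

        System.out.println(paths.size());
    }
}
```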
paths
- File paths to show. Each path can be a KiFS directory
name, or a full path to a KiFS file. File paths may
contain wildcard characters after the KiFS directory
delimiter. Accepted wildcard characters are asterisk (*)
to represent any string of zero or more characters, and
question mark (?) to indicate a single character.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowFunctionsResponse showFunctions(ShowFunctionsRequest request) throws GPUdbException
GPUdbException
public ShowFunctionsResponse showFunctions(List<String> names, Map<String,String> options) throws GPUdbException
GPUdbException
public ShowGraphResponse showGraph(ShowGraphRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowGraphResponse showGraph(String graphName, Map<String,String> options) throws GPUdbException
graphName
- Name of the graph on which to retrieve information. If
left as the default value, information about all
graphs is returned. The default value is ''.options
- Optional parameters.
SHOW_ORIGINAL_REQUEST
: If set to TRUE
, the request that was originally used to
create the graph is also returned as JSON.
Supported values:
The default value is TRUE
.
SERVER_ID
: Indicates which graph server(s) to
send the request to. By default, the request is sent
to all servers, returning information about each.
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowGraphGrammarResponse showGraphGrammar(ShowGraphGrammarRequest request) throws GPUdbException
GPUdbException
public ShowGraphGrammarResponse showGraphGrammar(Map<String,String> options) throws GPUdbException
GPUdbException
public ShowModelResponse showModel(ShowModelRequest request) throws GPUdbException
GPUdbException
public ShowModelResponse showModel(List<String> modelNames, Map<String,String> options) throws GPUdbException
GPUdbException
public ShowProcResponse showProc(ShowProcRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowProcResponse showProc(String procName, Map<String,String> options) throws GPUdbException
procName
- Name of the proc to show information about. If
specified, must be the name of a currently existing
proc. If not specified, information about all procs
will be returned. The default value is ''.options
- Optional parameters.
INCLUDE_FILES
: If set to TRUE
, the files that make up the proc will be
returned. If set to FALSE
, the files will not be returned.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowProcStatusResponse showProcStatus(ShowProcStatusRequest request) throws GPUdbException
Shows the statuses of running or completed proc instances. Results are grouped by run ID (as returned from executeProc
) and data segment ID
(each invocation of the proc command on a data segment is assigned a
data segment ID).request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowProcStatusResponse showProcStatus(String runId, Map<String,String> options) throws GPUdbException
Shows the statuses of running or completed proc instances. Results are grouped by run ID (as returned from executeProc
) and data segment ID (each
invocation of the proc command on a data segment is assigned a data
segment ID).runId
- The run ID of a specific proc instance for which the
status will be returned. If a proc with a matching run ID
is not found, the response will be empty. If not
specified, the statuses of all executed proc instances
will be returned. The default value is ''.options
- Optional parameters.
CLEAR_COMPLETE
: If set to TRUE
, if a proc instance has completed (either
successfully or unsuccessfully) then its status
will be cleared and no longer returned in
subsequent calls.
Supported values:
The default value is FALSE
.
RUN_TAG
: If runId
is specified, return
the status for a proc instance that has a
matching run ID and a matching run tag that was
provided to executeProc
. If
runId
is not specified, return statuses
for all proc instances where a matching run tag
was provided to executeProc
.
The default value is ''.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowResourceObjectsResponse showResourceObjects(ShowResourceObjectsRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public ShowResourceObjectsResponse showResourceObjects(Map<String,String> options) throws GPUdbException
options
- Optional parameters.
TIERS
: Comma-separated list of tiers to query,
leave blank for all tiers.
EXPRESSION
: An expression to filter the
returned objects. Expression is limited to the
following operators:
=,!=,<,<=,>,>=,+,-,*,AND,OR,LIKE.
For details see Expressions. To use a more
complex expression, query the
ki_catalog.ki_tiered_objects table directly.
ORDER_BY
: Single column to be sorted by as well
as the sort direction, e.g., 'size asc'.
Supported values:
LIMIT
: An integer indicating the maximum number
of results to be returned, per rank, or (-1) to
indicate that the maximum number of results
allowed by the server should be returned. The
number of records returned will never exceed the
server's own limit, defined by the max_get_records_size parameter
in the server configuration. The default value
is '100'.
TABLE_NAMES
: Comma-separated list of tables to
restrict the results to. Use '*' to show all
tables.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public ShowResourceStatisticsResponse showResourceStatistics(ShowResourceStatisticsRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public ShowResourceStatisticsResponse showResourceStatistics(Map<String,String> options) throws GPUdbException
options
- Optional parameters. The default value is an empty
Map
.Response
object
containing the results of the operation.GPUdbException
- if an error occurs during the operation.public ShowResourceGroupsResponse showResourceGroups(ShowResourceGroupsRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public ShowResourceGroupsResponse showResourceGroups(List<String> names, Map<String,String> options) throws GPUdbException
names
- List of names of groups to be shown. A single entry with
an empty string returns all groups.options
- Optional parameters.
SHOW_DEFAULT_VALUES
: If TRUE
include values of fields that are based on
the default resource group.
Supported values:
The default value is TRUE
.
SHOW_DEFAULT_GROUP
: If TRUE
include the default and system resource
groups in the response. This value defaults to
false if an explicit list of group names is
provided, and true otherwise.
Supported values:
The default value is TRUE
.
SHOW_TIER_USAGE
: If TRUE
include the resource group usage on the
worker ranks in the response.
Supported values:
The default value is FALSE
.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSchemaResponse showSchema(ShowSchemaRequest request) throws GPUdbException
Retrieves information about a schema (or all schemas), as specified in schemaName
.request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSchemaResponse showSchema(String schemaName, Map<String,String> options) throws GPUdbException
Retrieves information about a schema (or all schemas), as specified in schemaName
.schemaName
- Name of the schema for which to retrieve the
information. If blank, then info for all schemas is
returned.options
- Optional parameters.
NO_ERROR_IF_NOT_EXISTS
: If FALSE
will return an error if the provided
schemaName
does not exist. If TRUE
then it will return an empty result if the
provided schemaName
does not exist.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSecurityResponse showSecurity(ShowSecurityRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSecurityResponse showSecurity(List<String> names, Map<String,String> options) throws GPUdbException
names
- A list of names of users and/or roles about which security
information is requested. If none are provided,
information about all users and roles will be returned.options
- Optional parameters.
SHOW_CURRENT_USER
: If TRUE
, returns only security information for the
current user.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSqlProcResponse showSqlProc(ShowSqlProcRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSqlProcResponse showSqlProc(String procedureName, Map<String,String> options) throws GPUdbException
procedureName
- Name of the procedure for which to retrieve the
information. If blank, then information about all
procedures is returned. The default value is ''.options
- Optional parameters.
NO_ERROR_IF_NOT_EXISTS
: If TRUE
, no error will be returned if the
requested procedure does not exist. If FALSE
, an error will be returned if the
requested procedure does not exist.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowStatisticsResponse showStatistics(ShowStatisticsRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowStatisticsResponse showStatistics(List<String> tableNames, Map<String,String> options) throws GPUdbException
tableNames
- Names of tables whose metadata will be fetched, each
in [schema_name.]table_name format, using standard name resolution rules. All
provided tables must exist, or an error is returned.options
- Optional parameters.
NO_ERROR_IF_NOT_EXISTS
: If TRUE
and if the table names specified in tableNames
do not exist, no error is
returned. If FALSE
and if the table names specified in
tableNames
do not exist, then an error
is returned.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSystemPropertiesResponse showSystemProperties(ShowSystemPropertiesRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSystemPropertiesResponse showSystemProperties(Map<String,String> options) throws GPUdbException
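The PROPERTIES option described below takes a comma-separated list of property names. A minimal sketch (the property names and URL are illustrative, not from this document; the live call is commented out because it needs a running server):

```java
import java.util.HashMap;
import java.util.Map;

public class ShowSystemPropertiesSketch {
    public static void main(String[] args) throws Exception {
        // Request only the named properties; omit the option to get all.
        Map<String, String> options = new HashMap<>();
        options.put("properties", "version.gpudb_core_version,conf.enable_ha");

        // Hypothetical call against a live server:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowSystemPropertiesResponse resp =
        //         gpudb.showSystemProperties(options);

        System.out.println(options.get("properties"));
    }
}
```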
options
- Optional parameters.
PROPERTIES
: A list of comma separated names of
properties requested. If not specified, all
properties will be returned.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSystemStatusResponse showSystemStatus(ShowSystemStatusRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSystemStatusResponse showSystemStatus(Map<String,String> options) throws GPUdbException
options
- Optional parameters, currently unused. The default value
is an empty Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSystemTimingResponse showSystemTiming(ShowSystemTimingRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowSystemTimingResponse showSystemTiming(Map<String,String> options) throws GPUdbException
options
- Optional parameters, currently unused. The default value
is an empty Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowTableResponse showTable(ShowTableRequest request) throws GPUdbException
Retrieves detailed information about a table, view, or schema, specified in tableName
.
If the supplied tableName
is a
schema the call can return information about either the schema itself or
the tables and views it contains. If tableName
is empty,
information about all schemas will be returned.
If the option GET_SIZES
is set
to TRUE
, then
the number of records in each table is returned (in sizes
and fullSizes
), along
with the total number of objects across all requested tables (in totalSize
and
totalFullSize
).
For a schema, setting the SHOW_CHILDREN
option to FALSE
returns only information about the schema itself; setting SHOW_CHILDREN
to TRUE
returns
a list of tables and views contained in the schema, along with their
corresponding detail.
To retrieve a list of every table, view, and schema in the database, set
tableName
to
'*' and SHOW_CHILDREN
to TRUE
. When doing
this, the returned totalSize
and
totalFullSize
will not include the sizes of non-base tables (e.g.,
filters, views, joins, etc.).
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowTableResponse showTable(String tableName, Map<String,String> options) throws GPUdbException
Retrieves detailed information about a table, view, or schema, specified in tableName
. If the supplied tableName
is a schema the
call can return information about either the schema itself or the tables
and views it contains. If tableName
is empty, information about
all schemas will be returned.
If the option GET_SIZES
is set
to TRUE
, then
the number of records in each table is returned (in sizes
and fullSizes
), along
with the total number of objects across all requested tables (in totalSize
and
totalFullSize
).
For a schema, setting the SHOW_CHILDREN
option to FALSE
returns only information about the schema itself; setting SHOW_CHILDREN
to TRUE
returns
a list of tables and views contained in the schema, along with their
corresponding detail.
To retrieve a list of every table, view, and schema in the database, set
tableName
to '*' and SHOW_CHILDREN
to TRUE
. When
doing this, the returned totalSize
and
totalFullSize
will not include the sizes of non-base tables (e.g.,
filters, views, joins, etc.).
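The list-everything case above can be sketched as follows (the URL is illustrative; the live call is commented out because it needs a running server and the Java API on the classpath):

```java
import java.util.HashMap;
import java.util.Map;

public class ShowAllTablesSketch {
    public static void main(String[] args) throws Exception {
        // '*' with SHOW_CHILDREN=true enumerates every table, view, and
        // schema; GET_SIZES=true adds per-table record counts.
        Map<String, String> options = new HashMap<>();
        options.put("show_children", "true");
        options.put("get_sizes", "true");

        // Hypothetical call against a live server:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // ShowTableResponse resp = gpudb.showTable("*", options);

        System.out.println(options.get("show_children"));
    }
}
```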
tableName
- Name of the table for which to retrieve the
information, in [schema_name.]table_name format, using
standard name resolution rules. If blank,
then returns information about all tables and views.options
- Optional parameters.
DEPENDENCIES
: Include view dependencies in the
output.
Supported values:
The default value is FALSE
.
FORCE_SYNCHRONOUS
: If TRUE
then the table size calculation will wait for a read
lock before returning.
Supported values:
The default value is TRUE
.
GET_CACHED_SIZES
: If TRUE
then the number of records in each table,
along with a cumulative count, will be returned;
blank, otherwise. This version will return the
sizes cached at rank 0, which may be stale if
there is a multihead insert occurring.
Supported values:
The default value is FALSE
.
GET_SIZES
: If TRUE
then the number of records in each table,
along with a cumulative count, will be returned;
blank, otherwise.
Supported values:
The default value is FALSE
.
NO_ERROR_IF_NOT_EXISTS
: If FALSE
will return an error if the provided
tableName
does not exist. If TRUE
then it will return an empty result.
Supported values:
The default value is FALSE
.
SHOW_CHILDREN
: If tableName
is a
schema, then TRUE
will return information about the tables
and views in the schema, and FALSE
will return information about the schema
itself. If tableName
is a table or view,
SHOW_CHILDREN
must be FALSE
. If tableName
is empty, then
SHOW_CHILDREN
must be TRUE
.
Supported values:
The default value is TRUE
.
GET_COLUMN_INFO
: If TRUE
then column info (memory usage, etc.) will
be returned.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowTableMetadataResponse showTableMetadata(ShowTableMetadataRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowTableMetadataResponse showTableMetadata(List<String> tableNames, Map<String,String> options) throws GPUdbException
tableNames
- Names of tables whose metadata will be fetched, in
[schema_name.]table_name format, using standard name resolution rules. All
provided tables must exist, or an error is returned.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowTableMonitorsResponse showTableMonitors(ShowTableMonitorsRequest request) throws GPUdbException
Show table monitors and their properties. Monitors are created using createTableMonitor
.
Returns detailed information about existing table monitors.request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowTableMonitorsResponse showTableMonitors(List<String> monitorIds, Map<String,String> options) throws GPUdbException
Show table monitors and their properties. Monitors are created using createTableMonitor
.
Returns detailed information about existing table monitors.monitorIds
- List of monitors to be shown. An empty list or a
single entry with an empty string returns all table
monitors.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowTablesByTypeResponse showTablesByType(ShowTablesByTypeRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowTablesByTypeResponse showTablesByType(String typeId, String label, Map<String,String> options) throws GPUdbException
typeId
- Type ID returned by a call to createType
.label
- Optional user supplied label which can be used instead of
the type_id to retrieve all tables with the given label.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowTriggersResponse showTriggers(ShowTriggersRequest request) throws GPUdbException
request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowTriggersResponse showTriggers(List<String> triggerIds, Map<String,String> options) throws GPUdbException
triggerIds
- List of IDs of the triggers whose information is to
be retrieved. An empty list means information will be
retrieved on all active triggers.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public ShowTypesResponse showTypes(ShowTypesRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowTypesResponse showTypes(String typeId, String label, Map<String,String> options) throws GPUdbException
typeId
- Type ID returned in response to a call to createType
.label
- Optional label string that was supplied by the user in a call to
createType
.options
- Optional parameters.
NO_JOIN_TYPES
: When set to 'true', no join
types will be included.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowVideoResponse showVideo(ShowVideoRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowVideoResponse showVideo(List<String> paths, Map<String,String> options) throws GPUdbException
paths
- The fully-qualified KiFS paths for the videos to show. If
empty, shows all videos.options
- Optional parameters. The default value is an empty
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowWalResponse showWal(ShowWalRequest request) throws GPUdbException
request
- Request
object containing the
parameters for the operation.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public ShowWalResponse showWal(List<String> tableNames, Map<String,String> options) throws GPUdbException
tableNames
- List of tables to query. An asterisk returns all
tables.options
- Optional parameters.
SHOW_SETTINGS
: If TRUE
include a map of the wal settings for the
requested tables.
Supported values:
The default value is TRUE
.
Map
.Response
object containing the results
of the operation.GPUdbException
- if an error occurs during the operation.public SolveGraphResponse solveGraph(SolveGraphRequest request) throws GPUdbException
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /solve/graph examples before using this endpoint.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public SolveGraphResponse solveGraph(String graphName, List<String> weightsOnEdges, List<String> restrictions, String solverType, List<String> sourceNodes, List<String> destinationNodes, String solutionTable, Map<String,String> options) throws GPUdbException
IMPORTANT: It's highly recommended that you review the Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /solve/graph examples before using this endpoint.
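As a sketch of a SHORTEST_PATH solve under this overload: the graph name, WKT points, and option strings below are hypothetical, and the literal keys are assumed to be the lowercase forms of the documented constants. The live call needs an existing graph on a running server, so it is commented out.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SolveGraphExample {
    // Caps the number of solution targets (0 means the setting is ignored,
    // per the MAX_SOLUTION_TARGETS option described below).
    public static Map<String, String> solveOptions(int maxTargets) {
        Map<String, String> options = new HashMap<>();
        options.put("max_solution_targets", String.valueOf(maxTargets));
        return options;
    }

    public static void main(String[] args) throws Exception {
        // Identifier-combination syntax from the parameter docs below:
        List<String> weights = Arrays.asList("ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED");
        List<String> sources = Arrays.asList("{'POINT(-73.98 40.75)'} AS NODE_WKTPOINT");
        List<String> targets = Arrays.asList("{'POINT(-73.99 40.73)'} AS NODE_WKTPOINT");
        // Requires a live server and an existing graph (names are placeholders):
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // SolveGraphResponse resp = gpudb.solveGraph("road_graph", weights,
        //     java.util.Collections.emptyList(), "shortest_path", sources,
        //     targets, "graph_solutions", solveOptions(1000));
        System.out.println(solveOptions(1000));
    }
}
```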
graphName
- Name of the graph resource to solve.weightsOnEdges
- Additional weights to apply to the edges of an
existing graph. Weights must be specified using
identifiers; identifiers are
grouped as combinations. Identifiers can
be used with existing column names, e.g.,
'table.column AS WEIGHTS_EDGE_ID', expressions,
e.g., 'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED',
or constant values, e.g., '{4, 15, 2} AS
WEIGHTS_VALUESPECIFIED'. Any provided weights
will be added (in the case of
'WEIGHTS_VALUESPECIFIED') to or multiplied with
(in the case of 'WEIGHTS_FACTORSPECIFIED') the
existing weight(s). If using constant values in
an identifier combination, the number of values
specified must match across the combination. The
default value is an empty List
.restrictions
- Additional restrictions to apply to the nodes/edges
of an existing graph. Restrictions must be
specified using identifiers; identifiers are
grouped as combinations. Identifiers can be
used with existing column names, e.g.,
'table.column AS RESTRICTIONS_EDGE_ID',
expressions, e.g., 'column/2 AS
RESTRICTIONS_VALUECOMPARED', or constant values,
e.g., '{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED'.
If using constant values in an identifier
combination, the number of values specified must
match across the combination. If
remove_previous_restrictions option is set to true,
any provided restrictions will replace the existing
restrictions. Otherwise, any provided restrictions
will be added (in the case of
'RESTRICTIONS_VALUECOMPARED') to or replaced (in
the case of 'RESTRICTIONS_ONOFFCOMPARED'). The
default value is an empty List
.solverType
- The type of solver to use for the graph.
Supported values:
SHORTEST_PATH
: Solves for the optimal
(shortest) path based on weights and
restrictions from one source to destination
nodes. Also known as the Dijkstra solver.
PAGE_RANK
: Solves for the probability of
each destination node being visited based on
the links of the graph topology. Weights are
not required to use this solver.
PROBABILITY_RANK
: Solves for the
transitional probability (Hidden Markov) for
each node based on the weights (probability
assigned over given edges).
CENTRALITY
: Solves for the degree of a node
to depict how many pairs of individuals
would have to go through the node to reach
one another in the minimum number of hops.
Also known as betweenness.
MULTIPLE_ROUTING
: Solves for finding the
minimum cost cumulative path for a round-trip
starting from the given source and visiting
each given destination node once then
returning to the source. Also known as the
travelling salesman problem.
INVERSE_SHORTEST_PATH
: Solves for finding
the optimal path cost for each destination
node to route to the source node. Also known
as inverse Dijkstra or the service man
routing problem.
BACKHAUL_ROUTING
: Solves for optimal routes
that connect remote asset nodes to the fixed
(backbone) asset nodes.
ALLPATHS
: Solves for paths that would give
costs between max and min solution radii -
Make sure to limit by the
'max_solution_targets' option. Min cost
should be >= shortest_path cost.
STATS_ALL
: Solves for graph statistics such
as graph diameter, longest pairs, vertex
valences, topology numbers, average and max
cluster sizes, etc.
CLOSENESS
: Solves for the centrality
closeness score per node as the sum of the
inverse shortest path costs to all nodes in
the graph.
SHORTEST_PATH
.sourceNodes
- It can be one of the nodal identifiers, e.g.,
'NODE_WKTPOINT' for source nodes. For BACKHAUL_ROUTING
, this list depicts the fixed
assets. The default value is an empty List
.destinationNodes
- It can be one of the nodal identifiers, e.g.,
'NODE_WKTPOINT' for destination (target) nodes.
For BACKHAUL_ROUTING
, this list depicts the remote
assets. The default value is an empty List
.solutionTable
- Name of the table to store the solution, in
[schema_name.]table_name format, using standard name resolution rules. The
default value is 'graph_solutions'.options
- Additional parameters.
MAX_SOLUTION_RADIUS
: For ALLPATHS
, SHORTEST_PATH
and INVERSE_SHORTEST_PATH
solvers only. Sets the
maximum solution cost radius, which ignores the
destinationNodes
list and instead
outputs the nodes within the radius sorted by
ascending cost. If set to '0.0', the setting is
ignored. The default value is '0.0'.
MIN_SOLUTION_RADIUS
: For ALLPATHS
, SHORTEST_PATH
and INVERSE_SHORTEST_PATH
solvers only. Applicable
only when MAX_SOLUTION_RADIUS
is set. Sets the minimum
solution cost radius, which ignores the destinationNodes
list and instead outputs the
nodes within the radius sorted by ascending
cost. If set to '0.0', the setting is ignored.
The default value is '0.0'.
MAX_SOLUTION_TARGETS
: For ALLPATHS
, SHORTEST_PATH
and INVERSE_SHORTEST_PATH
solvers only. Sets the
maximum number of solution targets, which
ignores the destinationNodes
list and
instead outputs no more than n nodes
sorted by ascending cost where n is equal to the
setting value. If set to 0, the setting is
ignored. The default value is '1000'.
UNIFORM_WEIGHTS
: When specified, assigns the
given value to all the edges in the graph. Note
that weights provided in weightsOnEdges
will override this value.
LEFT_TURN_PENALTY
: This will add an additional
weight over the edges labelled as 'left turn' if
the 'add_turn' option parameter of the createGraph
was invoked at
graph creation. The default value is '0.0'.
RIGHT_TURN_PENALTY
: This will add an additional
weight over the edges labelled as 'right turn'
if the 'add_turn' option parameter of the createGraph
was invoked at
graph creation. The default value is '0.0'.
INTERSECTION_PENALTY
: This will add an
additional weight over the edges labelled as
'intersection' if the 'add_turn' option
parameter of the createGraph
was invoked at
graph creation. The default value is '0.0'.
SHARP_TURN_PENALTY
: This will add an additional
weight over the edges labelled as 'sharp turn'
or 'u-turn' if the 'add_turn' option parameter
of the createGraph
was
invoked at graph creation. The default value is
'0.0'.
NUM_BEST_PATHS
: For MULTIPLE_ROUTING
solvers only; sets the number
of shortest paths computed from each node. This
is the heuristic criterion. Default value of
zero allows the number to be computed
automatically by the solver. The user may want
to override this parameter to speed up the
solver. The default value is '0'.
MAX_NUM_COMBINATIONS
: For MULTIPLE_ROUTING
solvers only; sets the cap on
the combinatorial sequences generated. If the
default value of two million is overridden to a
lesser value, it can potentially speed up the
solver. The default value is '2000000'.
OUTPUT_EDGE_PATH
: If true then concatenated
edge ids will be added as the EDGE path column
of the solution table for each source and target
pair in shortest path solves.
Supported values:
The default value is FALSE
.
OUTPUT_WKT_PATH
: If true then concatenated wkt
line segments will be added as the Wktroute
column of the solution table for each source and
target pair in shortest path solves.
Supported values:
The default value is TRUE
.
SERVER_ID
: Indicates which graph server(s) to
send the request to. Default is to send to the
server, amongst those containing the
corresponding graph, that has the most
computational bandwidth. For SHORTEST_PATH
solver type, the input is split amongst the
servers containing the corresponding graph.
CONVERGENCE_LIMIT
: For PAGE_RANK
solvers only; Maximum percent
relative threshold on the pagerank scores of
each node between consecutive iterations to
satisfy convergence. Default value is 1 (one)
percent. The default value is '1.0'.
MAX_ITERATIONS
: For PAGE_RANK
solvers only; Maximum number of
pagerank iterations for satisfying convergence.
Default value is 100. The default value is
'100'.
MAX_RUNS
: For all CENTRALITY
solvers only; Sets the maximum
number of shortest path runs; maximum possible
value is the number of nodes in the graph.
Default value of 0 enables this value to be auto
computed by the solver. The default value is
'0'.
OUTPUT_CLUSTERS
: For STATS_ALL
solvers only; the cluster index for
each node will be inserted as an additional
column in the output.
Supported values:
TRUE
: An additional column 'CLUSTER'
will be added for each node
FALSE
: No extra cluster info per node
will be available in the output
FALSE
.
SOLVE_HEURISTIC
: Specify heuristic search
criterion only for the geo graphs and shortest
path solves towards a single target.
Supported values:
ASTAR
: Employs A-STAR heuristics to
speed up the shortest path traversal
NONE
: No heuristics are applied
NONE
.
ASTAR_RADIUS
: For path solvers only when
'solve_heuristic' option is 'astar'. The
shortest path traversal front includes nodes
only within this radius (kilometers) as it moves
towards the target location. The default value
is '70'.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public UpdateRecordsResponse updateRecordsRaw(RawUpdateRecordsRequest request) throws GPUdbException
newValuesMaps
. There is also an optional 'upsert' capability where if
a particular predicate doesn't match any existing record, then a new
record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default only 'pure
primary key' predicates are allowed when updating primary key values. If
the primary key for a table is the column 'attr1', then the operation
will only accept predicates of the form: "attr1 == 'foo'" if the attr1
column is being updated. For a composite primary key (e.g. columns
'attr1' and 'attr2') then this operation will only accept predicates of
the form: "(attr1 == 'foo') and (attr2 == 'bar')". Meaning, all primary
key columns must appear in an equality predicate in the expressions.
Furthermore each 'pure primary key' predicate must be unique within a
given request. These restrictions can be removed by utilizing some
available options through options
.
The UPDATE_ON_EXISTING_PK
option specifies the record primary key collision
policy for tables with a primary key, while IGNORE_EXISTING_PK
specifies the record primary key collision
error-suppression policy when those collisions result in the update
being rejected. Both are ignored on tables with no primary key.
request
- Request
object
containing the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public <TRequest> UpdateRecordsResponse updateRecords(UpdateRecordsRequest<TRequest> request) throws GPUdbException
newValuesMaps
. There is also an optional 'upsert' capability where if
a particular predicate doesn't match any existing record, then a new
record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default only 'pure
primary key' predicates are allowed when updating primary key values. If
the primary key for a table is the column 'attr1', then the operation
will only accept predicates of the form: "attr1 == 'foo'" if the attr1
column is being updated. For a composite primary key (e.g. columns
'attr1' and 'attr2') then this operation will only accept predicates of
the form: "(attr1 == 'foo') and (attr2 == 'bar')". Meaning, all primary
key columns must appear in an equality predicate in the expressions.
Furthermore each 'pure primary key' predicate must be unique within a
given request. These restrictions can be removed by utilizing some
available options through options
.
The UPDATE_ON_EXISTING_PK
option specifies the record primary key collision
policy for tables with a primary key, while IGNORE_EXISTING_PK
specifies the record primary key collision
error-suppression policy when those collisions result in the update
being rejected. Both are ignored on tables with no primary key.
TRequest
- The type of object being added.request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public <TRequest> UpdateRecordsResponse updateRecords(TypeObjectMap<TRequest> typeObjectMap, UpdateRecordsRequest<TRequest> request) throws GPUdbException
newValuesMaps
. There is also an optional 'upsert' capability where if
a particular predicate doesn't match any existing record, then a new
record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default only 'pure
primary key' predicates are allowed when updating primary key values. If
the primary key for a table is the column 'attr1', then the operation
will only accept predicates of the form: "attr1 == 'foo'" if the attr1
column is being updated. For a composite primary key (e.g. columns
'attr1' and 'attr2') then this operation will only accept predicates of
the form: "(attr1 == 'foo') and (attr2 == 'bar')". Meaning, all primary
key columns must appear in an equality predicate in the expressions.
Furthermore each 'pure primary key' predicate must be unique within a
given request. These restrictions can be removed by utilizing some
available options through options
.
The UPDATE_ON_EXISTING_PK
option specifies the record primary key collision
policy for tables with a primary key, while IGNORE_EXISTING_PK
specifies the record primary key collision
error-suppression policy when those collisions result in the update
being rejected. Both are ignored on tables with no primary key.
TRequest
- The type of object being added.typeObjectMap
- Type object map used for encoding input objects.request
- Request
object containing
the parameters for the operation.Response
object containing the
results of the operation.IllegalArgumentException
- if typeObjectMap
is not an
instance of one of the following:
Type
, TypeObjectMap
,
Schema
, or a
Class
that implements IndexedRecord
GPUdbException
- if an error occurs during the operation.public <TRequest> UpdateRecordsResponse updateRecords(String tableName, List<String> expressions, List<Map<String,String>> newValuesMaps, List<TRequest> data, Map<String,String> options) throws GPUdbException
newValuesMaps
. There is also an optional
'upsert' capability where if a particular predicate doesn't match any
existing record, then a new record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default only 'pure
primary key' predicates are allowed when updating primary key values. If
the primary key for a table is the column 'attr1', then the operation
will only accept predicates of the form: "attr1 == 'foo'" if the attr1
column is being updated. For a composite primary key (e.g. columns
'attr1' and 'attr2') then this operation will only accept predicates of
the form: "(attr1 == 'foo') and (attr2 == 'bar')". Meaning, all primary
key columns must appear in an equality predicate in the expressions.
Furthermore each 'pure primary key' predicate must be unique within a
given request. These restrictions can be removed by utilizing some
available options through options
.
The UPDATE_ON_EXISTING_PK
option specifies the record primary key collision
policy for tables with a primary key, while IGNORE_EXISTING_PK
specifies the record primary key collision
error-suppression policy when those collisions result in the update
being rejected. Both are ignored on tables with no primary key.
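The predicate/new-values pairing above can be sketched as follows. The table name, column names, and the literal "update_on_existing_pk" key (assumed to be the lowercase form of the UPDATE_ON_EXISTING_PK constant) are hypothetical; the live call needs a running server and is commented out.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UpdateRecordsExample {
    // One predicate per update; each parallel map holds the new column values
    // for the records matched by that predicate.
    public static List<Map<String, String>> newValues(String newName) {
        Map<String, String> m = new HashMap<>();
        m.put("name", newName);
        return Arrays.asList(m);
    }

    public static void main(String[] args) throws Exception {
        List<String> expressions = Arrays.asList("id = 42");
        // Overwrite on primary-key collision, per the option described below:
        Map<String, String> options = new HashMap<>();
        options.put("update_on_existing_pk", "true");
        // Requires a live server; table name is a placeholder:
        // GPUdb gpudb = new GPUdb("http://localhost:9191");
        // gpudb.updateRecords("ki_home.customers", expressions,
        //     newValues("Alice"), null, options);
        System.out.println(expressions + " -> " + newValues("Alice"));
    }
}
```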
TRequest
- The type of object being added.tableName
- Name of table to be updated, in
[schema_name.]table_name format, using standard name resolution rules. Must be a
currently existing table and not a view.expressions
- A list of the actual predicates, one for each
update; format should follow the guidelines here
.newValuesMaps
- List of new values for the matching records. Each
element is a map with (key, value) pairs where the
keys are the names of the columns whose values are
to be updated; the values are the new values. The
number of elements in the list should match the
length of expressions
.data
- An *optional* list of new binary-avro encoded records to
insert, one for each update. If one of expressions
does not yield a matching record to be updated, then the
corresponding element from this list will be added to the
table. The default value is an empty List
.options
- Optional parameters.
GLOBAL_EXPRESSION
: An optional global
expression to reduce the search space of the
predicates listed in expressions
. The
default value is ''.
BYPASS_SAFETY_CHECKS
: When set to TRUE
, all predicates are available for primary
key updates. Keep in mind that it is possible
to destroy data in this case, since a single
predicate may match multiple objects
(potentially all records of a table), and
then updating all of those records to have the
same primary key will, due to the primary key
uniqueness constraints, effectively delete all
but one of those updated records.
Supported values:
The default value is FALSE
.
UPDATE_ON_EXISTING_PK
: Specifies the record
collision policy for updating a table with a primary key. There are two
ways that a record collision can occur. The
first is an "update collision", which happens
when the update changes the value of the updated
record's primary key, and that new primary key
already exists as the primary key of another
record in the table. The second is an "insert
collision", which occurs when a given filter in
expressions
finds no records to update,
and the alternate insert record given in data
(or recordsToInsertStr
) contains a
primary key matching that of an existing record
in the table. If UPDATE_ON_EXISTING_PK
is set to TRUE
, "update collisions" will result in the
existing record collided into being removed and
the record updated with values specified in
newValuesMaps
taking its place; "insert
collisions" will result in the collided-into
record being updated with the values in data
/recordsToInsertStr
(if given). If
set to FALSE
, the existing collided-into record will
remain unchanged, while the update will be
rejected and the error handled as determined by
IGNORE_EXISTING_PK
. If the specified table
does not have a primary key, then this option
has no effect.
Supported values:
TRUE
: Overwrite the collided-into
record when updating a record's primary
key or inserting an alternate record
causes a primary key collision between
the record being updated/inserted and
another existing record in the table
FALSE
: Reject updates which cause
primary key collisions between the
record being updated/inserted and an
existing record in the table
FALSE
.
IGNORE_EXISTING_PK
: Specifies the record
collision error-suppression policy for updating
a table with a primary key, only used when
primary key record collisions are rejected
(UPDATE_ON_EXISTING_PK
is FALSE
). If set to TRUE
, any record update that is rejected for
resulting in a primary key collision with an
existing table record will be ignored with no
error generated. If FALSE
, the rejection of any update for
resulting in a primary key collision will cause
an error to be reported. If the specified table
does not have a primary key or if UPDATE_ON_EXISTING_PK
is TRUE
, then this option has no effect.
Supported values:
TRUE
: Ignore updates that result in
primary key collisions with existing
records
FALSE
: Treat as errors any updates that
result in primary key collisions with
existing records
FALSE
.
UPDATE_PARTITION
: Force qualifying records to
be deleted and reinserted so their partition
membership will be reevaluated.
Supported values:
The default value is FALSE
.
TRUNCATE_STRINGS
: If set to TRUE
, any strings which are too long for their
charN string fields will be truncated to fit.
Supported values:
The default value is FALSE
.
USE_EXPRESSIONS_IN_NEW_VALUES_MAPS
: When set to
TRUE
, all new values in newValuesMaps
are considered as expression values. When set to
FALSE
, all new values in newValuesMaps
are considered as constants. NOTE: When TRUE
, string constants will need to be quoted
to avoid being evaluated as expressions.
Supported values:
The default value is FALSE
.
RECORD_ID
: ID of a single record to be updated
(returned in the call to insertRecords
or getRecordsFromCollection
).
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public <TRequest> UpdateRecordsResponse updateRecords(TypeObjectMap<TRequest> typeObjectMap, String tableName, List<String> expressions, List<Map<String,String>> newValuesMaps, List<TRequest> data, Map<String,String> options) throws GPUdbException
newValuesMaps
. There is also an optional
'upsert' capability where if a particular predicate doesn't match any
existing record, then a new record can be inserted.
Note that this operation can only be run on an original table and not on a result view.
This operation can update primary key values. By default only 'pure
primary key' predicates are allowed when updating primary key values. If
the primary key for a table is the column 'attr1', then the operation
will only accept predicates of the form: "attr1 == 'foo'" if the attr1
column is being updated. For a composite primary key (e.g. columns
'attr1' and 'attr2') then this operation will only accept predicates of
the form: "(attr1 == 'foo') and (attr2 == 'bar')". Meaning, all primary
key columns must appear in an equality predicate in the expressions.
Furthermore each 'pure primary key' predicate must be unique within a
given request. These restrictions can be removed by utilizing some
available options through options
.
The UPDATE_ON_EXISTING_PK
option specifies the record primary key collision
policy for tables with a primary key, while IGNORE_EXISTING_PK
specifies the record primary key collision
error-suppression policy when those collisions result in the update
being rejected. Both are ignored on tables with no primary key.
TRequest
- The type of object being added.typeObjectMap
- Type object map used for encoding input objects.tableName
- Name of table to be updated, in
[schema_name.]table_name format, using standard name resolution rules. Must be a
currently existing table and not a view.expressions
- A list of the actual predicates, one for each
update; format should follow the guidelines here
.newValuesMaps
- List of new values for the matching records. Each
element is a map with (key, value) pairs where the
keys are the names of the columns whose values are
to be updated; the values are the new values. The
number of elements in the list should match the
length of expressions
.data
- An *optional* list of new binary-avro encoded records to
insert, one for each update. If one of expressions
does not yield a matching record to be updated, then the
corresponding element from this list will be added to the
table. The default value is an empty List
.options
- Optional parameters.
GLOBAL_EXPRESSION
: An optional global
expression to reduce the search space of the
predicates listed in expressions
. The
default value is ''.
BYPASS_SAFETY_CHECKS
: When set to TRUE
, all predicates are available for primary
key updates. Keep in mind that it is possible
to destroy data in this case, since a single
predicate may match multiple objects
(potentially all records of a table), and
then updating all of those records to have the
same primary key will, due to the primary key
uniqueness constraints, effectively delete all
but one of those updated records.
Supported values:
The default value is FALSE
.
UPDATE_ON_EXISTING_PK
: Specifies the record
collision policy for updating a table with a primary key. There are two
ways that a record collision can occur. The
first is an "update collision", which happens
when the update changes the value of the updated
record's primary key, and that new primary key
already exists as the primary key of another
record in the table. The second is an "insert
collision", which occurs when a given filter in
expressions
finds no records to update,
and the alternate insert record given in data
(or recordsToInsertStr
) contains a
primary key matching that of an existing record
in the table. If UPDATE_ON_EXISTING_PK
is set to TRUE
, "update collisions" will result in the
existing record collided into being removed and
the record updated with values specified in
newValuesMaps
taking its place; "insert
collisions" will result in the collided-into
record being updated with the values in data
/recordsToInsertStr
(if given). If
set to FALSE
, the existing collided-into record will
remain unchanged, while the update will be
rejected and the error handled as determined by
IGNORE_EXISTING_PK
. If the specified table
does not have a primary key, then this option
has no effect.
Supported values:
TRUE
: Overwrite the collided-into
record when updating a record's primary
key or inserting an alternate record
causes a primary key collision between
the record being updated/inserted and
another existing record in the table
FALSE
: Reject updates which cause
primary key collisions between the
record being updated/inserted and an
existing record in the table
FALSE
.
IGNORE_EXISTING_PK
: Specifies the record
collision error-suppression policy for updating
a table with a primary key, only used when
primary key record collisions are rejected
(UPDATE_ON_EXISTING_PK
is FALSE
). If set to TRUE
, any record update that is rejected for
resulting in a primary key collision with an
existing table record will be ignored with no
error generated. If FALSE
, the rejection of any update for
resulting in a primary key collision will cause
an error to be reported. If the specified table
does not have a primary key or if UPDATE_ON_EXISTING_PK
is TRUE
, then this option has no effect.
Supported values:
TRUE
: Ignore updates that result in
primary key collisions with existing
records
FALSE
: Treat as errors any updates that
result in primary key collisions with
existing records
FALSE
.
UPDATE_PARTITION
: Force qualifying records to
be deleted and reinserted so their partition
membership will be reevaluated.
Supported values:
The default value is FALSE
.
TRUNCATE_STRINGS
: If set to TRUE
, any strings which are too long for their
charN string fields will be truncated to fit.
Supported values:
The default value is FALSE
.
USE_EXPRESSIONS_IN_NEW_VALUES_MAPS
: When set to
TRUE
, all new values in newValuesMaps
are considered as expression values. When set to
FALSE
, all new values in newValuesMaps
are considered as constants. NOTE: When TRUE
, string constants will need to be quoted
to avoid being evaluated as expressions.
Supported values:
The default value is FALSE
.
RECORD_ID
: ID of a single record to be updated
(returned in the call to insertRecords
or getRecordsFromCollection
).
Map
.Response
object containing the
results of the operation.IllegalArgumentException
- if typeObjectMap
is not an
instance of one of the following:
Type
, TypeObjectMap
,
Schema
, or a
Class
that implements IndexedRecord
GPUdbException
- if an error occurs during the operation.public UpdateRecordsBySeriesResponse updateRecordsBySeries(UpdateRecordsBySeriesRequest request) throws GPUdbException
tableName
to include full series (track) information from the worldTableName
for the series (tracks) present in the viewName
.request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public UpdateRecordsBySeriesResponse updateRecordsBySeries(String tableName, String worldTableName, String viewName, List<String> reserved, Map<String,String> options) throws GPUdbException
tableName
to include full series
(track) information from the worldTableName
for the series
(tracks) present in the viewName
.tableName
- Name of the view on which the update operation will be
performed, in [schema_name.]view_name format, using
standard name resolution rules. Must be an
existing view.worldTableName
- Name of the table containing the complete series
(track) information, in [schema_name.]table_name
format, using standard name resolution rules.viewName
- Name of the view containing the series (tracks) which
have to be updated, in [schema_name.]view_name format,
using standard name resolution rules. The default
value is ''.reserved
- The default value is an empty List
.options
- Optional parameters. The default value is an empty
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.public UploadFilesResponse uploadFiles(UploadFilesRequest request) throws GPUdbException
To upload files in their entirety, populate fileNames
with the
file names to upload into on KiFS, and their respective byte content in
fileData
.
Multiple steps are involved when uploading in multiple parts. Only one file at a time can be uploaded in this manner. A user-provided UUID is utilized to tie all the upload steps together for a given file. To upload a file in multiple parts:
1. Provide the file name in fileNames
, the
UUID in the MULTIPART_UPLOAD_UUID
key in options
, and a
MULTIPART_OPERATION
value of INIT
.
2. Upload one or more parts by providing the file name, the part data in
fileData
,
the UUID, a MULTIPART_OPERATION
value of UPLOAD_PART
,
and the part number in the MULTIPART_UPLOAD_PART_NUMBER
. The part numbers must start at 1 and
increase incrementally. Parts may not be uploaded out of order.
3. Complete the upload by providing the file name, the UUID, and a
MULTIPART_OPERATION
value of COMPLETE
.
Multipart uploads in progress may be canceled by providing the file
name, the UUID, and a MULTIPART_OPERATION
value of CANCEL
. If a new
upload is initialized with a different UUID for an existing upload in
progress, the pre-existing upload is automatically canceled in favor of
the new upload.
The multipart upload must be completed for the file to be usable in
KiFS. Information about multipart uploads in progress is available in
showFiles
.
File data may be pre-encoded using base64 encoding. This should be
indicated using the FILE_ENCODING
option, and is recommended when using JSON serialization.
Each file path must reside in a top-level KiFS directory, i.e. one
of the directories listed in showDirectories
. The user
must have write permission on the directory. Nested directories are
permitted in file name paths. Directories are delineated with the
directory separator of '/'. For example, given the file path
'/a/b/c/d.txt', 'a' must be a KiFS directory.
These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.
request
- Request
object containing the
parameters for the operation.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.public UploadFilesResponse uploadFiles(List<String> fileNames, List<ByteBuffer> fileData, Map<String,String> options) throws GPUdbException
To upload files in their entirety, populate fileNames
with the
file names to upload into on KiFS, and their respective byte content in
fileData
.
Multiple steps are involved when uploading in multiple parts. Only one file at a time can be uploaded in this manner. A user-provided UUID is utilized to tie all the upload steps together for a given file. To upload a file in multiple parts:
1. Provide the file name in fileNames
, the UUID in the MULTIPART_UPLOAD_UUID
key in options
, and a MULTIPART_OPERATION
value of INIT
.
2. Upload one or more parts by providing the file name, the part data in
fileData
, the UUID, a MULTIPART_OPERATION
value of UPLOAD_PART
,
and the part number in the MULTIPART_UPLOAD_PART_NUMBER
. The part numbers must start at 1 and
increase incrementally. Parts may not be uploaded out of order.
3. Complete the upload by providing the file name, the UUID, and a
MULTIPART_OPERATION
value of COMPLETE
.
Multipart uploads in progress may be canceled by providing the file
name, the UUID, and a MULTIPART_OPERATION
value of CANCEL
. If a new
upload is initialized with a different UUID for an existing upload in
progress, the pre-existing upload is automatically canceled in favor of
the new upload.
The multipart upload must be completed for the file to be usable in
KiFS. Information about multipart uploads in progress is available in
showFiles
.
File data may be pre-encoded using base64 encoding. This should be
indicated using the FILE_ENCODING
option, and is recommended when using JSON serialization.
Each file path must reside in a top-level KiFS directory, i.e. one
of the directories listed in showDirectories
. The user must have write permission on the directory.
Nested directories are permitted in file name paths. Directories are
delineated with the directory separator of '/'. For example, given the
file path '/a/b/c/d.txt', 'a' must be a KiFS directory.
These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.
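The three multipart steps above can be sketched as builders for the per-step option maps. The lowercase key and value strings ("multipart_operation", "init", "upload_part", "complete", "multipart_upload_uuid", "multipart_upload_part_number") are assumptions standing in for the MULTIPART_* constants documented above, and the uploadFiles calls are shown only as comments.

```java
import java.util.HashMap;
import java.util.Map;

public class MultipartUploadSketch {
    // Step 1: initialize the multipart upload for a user-chosen UUID.
    public static Map<String, String> initOptions(String uuid) {
        Map<String, String> opts = new HashMap<>();
        opts.put("multipart_operation", "init");
        opts.put("multipart_upload_uuid", uuid);
        return opts;
    }

    // Step 2: upload one part; part numbers start at 1 and must be
    // uploaded sequentially.
    public static Map<String, String> partOptions(String uuid, int partNumber) {
        Map<String, String> opts = new HashMap<>();
        opts.put("multipart_operation", "upload_part");
        opts.put("multipart_upload_uuid", uuid);
        opts.put("multipart_upload_part_number", Integer.toString(partNumber));
        return opts;
    }

    // Step 3: complete the upload (substituting "cancel" would abort it).
    public static Map<String, String> completeOptions(String uuid) {
        Map<String, String> opts = new HashMap<>();
        opts.put("multipart_operation", "complete");
        opts.put("multipart_upload_uuid", uuid);
        return opts;
    }

    public static void main(String[] args) {
        String uuid = "0f8f-example-uuid"; // hypothetical
        // With a connected client, each step passes the same file name:
        // gpudb.uploadFiles(names, empty, initOptions(uuid));
        // gpudb.uploadFiles(names, part1, partOptions(uuid, 1));
        // gpudb.uploadFiles(names, part2, partOptions(uuid, 2));
        // gpudb.uploadFiles(names, empty, completeOptions(uuid));
        System.out.println(partOptions(uuid, 1).get("multipart_upload_part_number"));
    }
}
```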
fileNames
- An array of full file name paths to be used for the
files uploaded to KiFS. File names may have any number
of nested directories in their paths, but the
top-level directory must be an existing KiFS
directory. Each file must reside in or under a
top-level directory. A full file name path cannot be
larger than 1024 characters.fileData
- File data for the files being uploaded, for the
respective files in fileNames
.options
- Optional parameters.
FILE_ENCODING
: Encoding that has been applied
to the uploaded file data. When using JSON
serialization it is recommended to utilize
BASE64
. The caller is responsible for encoding
the data provided in this payload.
Supported values:
BASE64
: Specifies that the file data
being uploaded has been base64 encoded.
NONE
: The uploaded file data has not
been encoded.
NONE
.
MULTIPART_OPERATION
: Multipart upload operation
to perform.
Supported values:
NONE
: Default, indicates this is not a
multipart upload
INIT
: Initialize a multipart file
upload
UPLOAD_PART
: Uploads a part of the
specified multipart file upload
COMPLETE
: Complete the specified
multipart file upload
CANCEL
: Cancel the specified multipart
file upload
NONE
.
MULTIPART_UPLOAD_UUID
: UUID to uniquely
identify a multipart upload
MULTIPART_UPLOAD_PART_NUMBER
: Incremental part
number for each part in a multipart upload. Part
numbers start at 1, increment by 1, and must be
uploaded sequentially
DELETE_IF_EXISTS
: If TRUE
, any existing files specified in fileNames
will be deleted prior to the start of the
upload; otherwise, the file is replaced once
the upload completes. Rollback of the original
file is no longer possible if the upload is
cancelled, aborted, or fails when the file was
deleted beforehand.
Supported values:
The default value is FALSE
.
Map
.Response
object containing the
results of the operation.GPUdbException
- if an error occurs during the operation.
public UploadFilesFromurlResponse uploadFilesFromurl(UploadFilesFromurlRequest request) throws GPUdbException
Each file path must reside in a top-level KiFS directory, i.e. one
of the directories listed in showDirectories
. The user
must have write permission on the directory. Nested directories are
permitted in file name paths. Directories are delineated with the
directory separator of '/'. For example, given the file path
'/a/b/c/d.txt', 'a' must be a KiFS directory.
These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.
public UploadFilesFromurlResponse uploadFilesFromurl(List<String> fileNames, List<String> urls, Map<String,String> options) throws GPUdbException
Each file path must reside in a top-level KiFS directory, i.e. one
of the directories listed in showDirectories
. The user must have write permission on the directory.
Nested directories are permitted in file name paths. Directories are
delineated with the directory separator of '/'. For example, given the
file path '/a/b/c/d.txt', 'a' must be a KiFS directory.
These characters are allowed in file name paths: letters, numbers, spaces, the path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#' '='.
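Since each path's top-level segment must name an existing KiFS directory, a small helper for extracting that segment (a sketch, not part of the client API) can validate file names before calling uploadFilesFromurl; it mirrors the '/a/b/c/d.txt' example above, and the commented client call uses hypothetical names and URLs.

```java
public class KifsPathSketch {
    // Returns the top-level directory of a KiFS file path, e.g.
    // "/a/b/c/d.txt" -> "a"; that segment must be a directory
    // listed by showDirectories.
    public static String topLevelDirectory(String path) {
        String trimmed = path.startsWith("/") ? path.substring(1) : path;
        int slash = trimmed.indexOf('/');
        return slash < 0 ? trimmed : trimmed.substring(0, slash);
    }

    public static void main(String[] args) {
        System.out.println(topLevelDirectory("/a/b/c/d.txt"));
        // With a connected client, the call would be:
        // gpudb.uploadFilesFromurl(
        //     Arrays.asList("data/remote/report.csv"),
        //     Arrays.asList("https://example.com/report.csv"),
        //     new HashMap<String, String>());
    }
}
```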
fileNames
- An array of full file name paths to be used for the
files uploaded to KiFS. File names may have any number
of nested directories in their paths, but the
top-level directory must be an existing KiFS
directory. Each file must reside in or under a
top-level directory. A full file name path cannot be
larger than 1024 characters.urls
- List of URLs to upload, for each respective file in fileNames
.options
- Optional parameters. The default value is an empty
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.
public VisualizeGetFeatureInfoResponse visualizeGetFeatureInfo(VisualizeGetFeatureInfoRequest request) throws GPUdbException
GPUdbException
public VisualizeGetFeatureInfoResponse visualizeGetFeatureInfo(List<String> tableNames, List<String> xColumnNames, List<String> yColumnNames, List<String> geometryColumnNames, List<List<String>> queryColumnNames, String projection, double minX, double maxX, double minY, double maxY, int width, int height, int x, int y, int radius, long limit, String encoding, Map<String,String> options) throws GPUdbException
GPUdbException
public VisualizeImageResponse visualizeImage(VisualizeImageRequest request) throws GPUdbException
GPUdbException
public VisualizeImageResponse visualizeImage(List<String> tableNames, List<String> worldTableNames, String xColumnName, String yColumnName, String symbolColumnName, String geometryColumnName, List<List<String>> trackIds, double minX, double maxX, double minY, double maxY, int width, int height, String projection, long bgColor, Map<String,List<String>> styleOptions, Map<String,String> options) throws GPUdbException
GPUdbException
public VisualizeImageChartResponse visualizeImageChart(VisualizeImageChartRequest request) throws GPUdbException
imageData
field.request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.
public VisualizeImageChartResponse visualizeImageChart(String tableName, List<String> xColumnNames, List<String> yColumnNames, double minX, double maxX, double minY, double maxY, int width, int height, String bgColor, Map<String,List<String>> styleOptions, Map<String,String> options) throws GPUdbException
imageData
field.tableName
- Name of the table containing the data to be drawn as a
chart, in [schema_name.]table_name format, using
standard name resolution rules.xColumnNames
- Names of the columns containing the data mapped to
the x axis of a chart.yColumnNames
- Names of the columns containing the data mapped to
the y axis of a chart.minX
- Lower bound for the x column values. For non-numeric x
column, each x column item is mapped to an integral value
starting from 0.maxX
- Upper bound for the x column values. For non-numeric x
column, each x column item is mapped to an integral value
starting from 0.minY
- Lower bound for the y column values. For non-numeric y
column, each y column item is mapped to an integral value
starting from 0.maxY
- Upper bound for the y column values. For non-numeric y
column, each y column item is mapped to an integral value
starting from 0.width
- Width of the generated image in pixels.height
- Height of the generated image in pixels.bgColor
- Background color of the generated image.styleOptions
- Rendering style options for a chart.
POINTCOLOR
: The color of points in the
plot represented as a hexadecimal number.
The default value is '0000FF'.
POINTSIZE
: The size of points in the plot
represented as number of pixels. The
default value is '3'.
POINTSHAPE
: The shape of points in the
plot.
Supported values:
The default value is SQUARE
.
CB_POINTCOLORS
: Point color class break
information consisting of three entries:
class-break attribute, class-break
values/ranges, and point color values. This
option overrides the pointcolor option if
both are provided. Class-break ranges are
represented in the form of "min:max".
Class-break values/ranges and point color
values are separated by cb_delimiter, e.g.
{"price", "20:30;30:40;40:50",
"0xFF0000;0x00FF00;0x0000FF"}.
CB_POINTSIZES
: Point size class break
information consisting of three entries:
class-break attribute, class-break
values/ranges, and point size values. This
option overrides the pointsize option if
both are provided. Class-break ranges are
represented in the form of "min:max".
Class-break values/ranges and point size
values are separated by cb_delimiter, e.g.
{"states", "NY;TX;CA", "3;5;7"}.
CB_POINTSHAPES
: Point shape class break
information consisting of three entries:
class-break attribute, class-break
values/ranges, and point shape names. This
option overrides the pointshape option if
both are provided. Class-break ranges are
represented in the form of "min:max".
Class-break values/ranges and point shape
names are separated by cb_delimiter, e.g.
{"states", "NY;TX;CA",
"circle;square;diamond"}.
CB_DELIMITER
: A character or string which
separates per-class values in a class-break
style option string. The default value is
';'.
X_ORDER_BY
: An expression or aggregate
expression by which non-numeric x column
values are sorted, e.g. "avg(price)
descending".
Y_ORDER_BY
: An expression or aggregate
expression by which non-numeric y column
values are sorted, e.g. "avg(price)", which
defaults to "avg(price) ascending".
SCALE_TYPE_X
: Type of x axis scale.
Supported values:
The default value is NONE
.
SCALE_TYPE_Y
: Type of y axis scale.
Supported values:
The default value is NONE
.
MIN_MAX_SCALED
: If this option is set to
"false", this endpoint expects the request's
min/max values to be unscaled; they will be
scaled according to scale_type_x or
scale_type_y for the response. If this option
is set to "true", this endpoint expects the
request's min/max values to already be scaled
according to scale_type_x/scale_type_y, and
the response's min/max values will equal the
request's min/max values. The default value
is 'false'.
JITTER_X
: Amplitude of horizontal jitter
applied to non-numeric x column values. The
default value is '0.0'.
JITTER_Y
: Amplitude of vertical jitter
applied to non-numeric y column values. The
default value is '0.0'.
PLOT_ALL
: If this option is set to
"true", all non-numeric column values are
plotted ignoring min_x, max_x, min_y and
max_y parameters. The default value is
'false'.
options
- Optional parameters.
IMAGE_ENCODING
: Encoding to be applied to the
output image. When using JSON serialization it
is recommended to specify this as BASE64
.
Supported values:
BASE64
: Apply base64 encoding to the
output image.
NONE
: Do not apply any additional
encoding to the output image.
NONE
.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.
public VisualizeImageClassbreakResponse visualizeImageClassbreak(VisualizeImageClassbreakRequest request) throws GPUdbException
GPUdbException
public VisualizeImageClassbreakResponse visualizeImageClassbreak(List<String> tableNames, List<String> worldTableNames, String xColumnName, String yColumnName, String symbolColumnName, String geometryColumnName, List<List<String>> trackIds, String cbAttr, List<String> cbVals, String cbPointcolorAttr, List<String> cbPointcolorVals, String cbPointalphaAttr, List<String> cbPointalphaVals, String cbPointsizeAttr, List<String> cbPointsizeVals, String cbPointshapeAttr, List<String> cbPointshapeVals, double minX, double maxX, double minY, double maxY, int width, int height, String projection, long bgColor, Map<String,List<String>> styleOptions, Map<String,String> options, List<Integer> cbTransparencyVec) throws GPUdbException
GPUdbException
public VisualizeImageContourResponse visualizeImageContour(VisualizeImageContourRequest request) throws GPUdbException
GPUdbException
public VisualizeImageContourResponse visualizeImageContour(List<String> tableNames, String xColumnName, String yColumnName, String valueColumnName, double minX, double maxX, double minY, double maxY, int width, int height, String projection, Map<String,String> styleOptions, Map<String,String> options) throws GPUdbException
GPUdbException
public VisualizeImageHeatmapResponse visualizeImageHeatmap(VisualizeImageHeatmapRequest request) throws GPUdbException
GPUdbException
public VisualizeImageHeatmapResponse visualizeImageHeatmap(List<String> tableNames, String xColumnName, String yColumnName, String valueColumnName, String geometryColumnName, double minX, double maxX, double minY, double maxY, int width, int height, String projection, Map<String,String> styleOptions, Map<String,String> options) throws GPUdbException
GPUdbException
public VisualizeImageLabelsResponse visualizeImageLabels(VisualizeImageLabelsRequest request) throws GPUdbException
GPUdbException
public VisualizeImageLabelsResponse visualizeImageLabels(String tableName, String xColumnName, String yColumnName, String xOffset, String yOffset, String textString, String font, String textColor, String textAngle, String textScale, String drawBox, String drawLeader, String lineWidth, String lineColor, String fillColor, String leaderXColumnName, String leaderYColumnName, String filter, double minX, double maxX, double minY, double maxY, int width, int height, String projection, Map<String,String> options) throws GPUdbException
GPUdbException
public VisualizeIsochroneResponse visualizeIsochrone(VisualizeIsochroneRequest request) throws GPUdbException
request
- Request
object
containing the parameters for the operation.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.
public VisualizeIsochroneResponse visualizeIsochrone(String graphName, String sourceNode, double maxSolutionRadius, List<String> weightsOnEdges, List<String> restrictions, int numLevels, boolean generateImage, String levelsTable, Map<String,String> styleOptions, Map<String,String> solveOptions, Map<String,String> contourOptions, Map<String,String> options) throws GPUdbException
graphName
- Name of the graph on which the isochrone is to be
computed.sourceNode
- Starting vertex on the underlying graph from/to which
the isochrones are created.maxSolutionRadius
- Extent of the search radius around sourceNode
. Set to '-1.0' for unrestricted
search radius. The default value is -1.0.weightsOnEdges
- Additional weights to apply to the edges of an
existing graph. Weights must be specified using
identifiers; identifiers are
grouped as combinations. Identifiers can
be used with existing column names, e.g.,
'table.column AS WEIGHTS_EDGE_ID', or
expressions, e.g., 'ST_LENGTH(wkt) AS
WEIGHTS_VALUESPECIFIED'. Any provided weights
will be added (in the case of
'WEIGHTS_VALUESPECIFIED') to or multiplied with
(in the case of 'WEIGHTS_FACTORSPECIFIED') the
existing weight(s). The default value is an empty
List
.restrictions
- Additional restrictions to apply to the nodes/edges
of an existing graph. Restrictions must be
specified using identifiers; identifiers are
grouped as combinations. Identifiers can be
used with existing column names, e.g.,
'table.column AS RESTRICTIONS_EDGE_ID', or
expressions, e.g., 'column/2 AS
RESTRICTIONS_VALUECOMPARED'. If REMOVE_PREVIOUS_RESTRICTIONS
is set to TRUE
, any provided restrictions will replace the
existing restrictions. If REMOVE_PREVIOUS_RESTRICTIONS
is set to FALSE
, any provided restrictions will be added (in
the case of 'RESTRICTIONS_VALUECOMPARED') to or
replaced (in the case of
'RESTRICTIONS_ONOFFCOMPARED'). The default value is
an empty List
.numLevels
- Number of equally-separated isochrones to compute. The
default value is 1.generateImage
- If set to TRUE
, generates a PNG image of the isochrones in
the response.
Supported values:
true
false
true
.levelsTable
- Name of the table to output the isochrones to, in
[schema_name.]table_name format, using standard name resolution rules and meeting
table naming criteria. The table
will contain levels and their corresponding WKT
geometry. If no value is provided, the table is not
generated. The default value is ''.styleOptions
- Various style related options of the isochrone
image.
LINE_SIZE
: The width of the contour lines
in pixels. The default value is '3'. The
minimum allowed value is '0'. The maximum
allowed value is '20'.
COLOR
: Color of generated isolines. All
color values must be in the format RRGGBB
or AARRGGBB (to specify the alpha value).
If alpha is specified and flooded contours
are enabled, it will be used as their
transparency. The default
value is 'FF696969'.
BG_COLOR
: When generateImage
is
set to TRUE
, background color of the generated
image. All color values must be in the
format RRGGBB or AARRGGBB (to specify the
alpha value). The default value is
'00000000'.
TEXT_COLOR
: When ADD_LABELS
is set to TRUE
, color for the labels. All color
values must be in the format RRGGBB or
AARRGGBB (to specify the alpha value). The
default value is 'FF000000'.
COLORMAP
: Colormap for contours or fill-in
regions when applicable. All color values
must be in the format RRGGBB or AARRGGBB
(to specify the alpha value).
Supported values:
JET
ACCENT
AFMHOT
AUTUMN
BINARY
BLUES
BONE
BRBG
BRG
BUGN
BUPU
BWR
CMRMAP
COOL
COOLWARM
COPPER
CUBEHELIX
DARK2
FLAG
GIST_EARTH
GIST_GRAY
GIST_HEAT
GIST_NCAR
GIST_RAINBOW
GIST_STERN
GIST_YARG
GNBU
GNUPLOT2
GNUPLOT
GRAY
GREENS
GREYS
HOT
HSV
INFERNO
MAGMA
NIPY_SPECTRAL
OCEAN
ORANGES
ORRD
PAIRED
PASTEL1
PASTEL2
PINK
PIYG
PLASMA
PRGN
PRISM
PUBU
PUBUGN
PUOR
PURD
PURPLES
RAINBOW
RDBU
RDGY
RDPU
RDYLBU
RDYLGN
REDS
SEISMIC
SET1
SET2
SET3
SPECTRAL
SPRING
SUMMER
TERRAIN
VIRIDIS
WINTER
WISTIA
YLGN
YLGNBU
YLORBR
YLORRD
JET
.
solveOptions
- Solver specific parameters.
REMOVE_PREVIOUS_RESTRICTIONS
: Ignore the
restrictions applied to the graph during
the creation stage and only use the
restrictions specified in this request if
set to TRUE
.
Supported values:
The default value is FALSE
.
RESTRICTION_THRESHOLD_VALUE
: Value-based
restriction comparison. Any node or edge
with a 'RESTRICTIONS_VALUECOMPARED' value
greater than the RESTRICTION_THRESHOLD_VALUE
will not be
included in the solution.
UNIFORM_WEIGHTS
: When specified, assigns
the given value to all the edges in the
graph. Note that weights provided in weightsOnEdges
will override this value.
Map
.contourOptions
- Solver specific parameters.
PROJECTION
: Spatial Reference System
(i.e. EPSG Code).
Supported values:
The default value is PLATE_CARREE
.
WIDTH
: When generateImage
is set
to TRUE
, width of the generated image. The
default value is '512'.
HEIGHT
: When generateImage
is
set to TRUE
, height of the generated image. If
the default value is used, the HEIGHT
is set to the value resulting
from multiplying the aspect ratio by the
WIDTH
. The default value is '-1'.
SEARCH_RADIUS
: When interpolating the
graph solution to generate the isochrone,
neighborhood of influence of sample data
(in percent of the image/grid). The
default value is '20'.
GRID_SIZE
: When interpolating the graph
solution to generate the isochrone,
number of subdivisions along the x axis
when building the grid (the y is computed
using the aspect ratio of the output
image). The default value is '100'.
COLOR_ISOLINES
: Color each isoline
according to the colormap; otherwise, use
the foreground color.
Supported values:
The default value is TRUE
.
ADD_LABELS
: If set to TRUE
, add labels to the isolines.
Supported values:
The default value is FALSE
.
LABELS_FONT_SIZE
: When ADD_LABELS
is set to TRUE
, size of the font (in pixels) to
use for labels. The default value is
'12'.
LABELS_FONT_FAMILY
: When ADD_LABELS
is set to TRUE
, font name to be used when adding
labels. The default value is 'arial'.
LABELS_SEARCH_WINDOW
: When ADD_LABELS
is set to TRUE
, a search window is used to rate
the local quality of each isoline.
Smooth, continuous, long stretches with
relatively flat angles are favored. The
provided value is multiplied by the
LABELS_FONT_SIZE
to calculate the final
window size. The default value is '4'.
LABELS_INTRALEVEL_SEPARATION
: When
ADD_LABELS
is set to TRUE
, this value determines the
distance (in multiples of the LABELS_FONT_SIZE
) to use when separating
labels of different values. The default
value is '4'.
LABELS_INTERLEVEL_SEPARATION
: When
ADD_LABELS
is set to TRUE
, this value determines the distance
(in percent of the total window size) to
use when separating labels of the same
value. The default value is '20'.
LABELS_MAX_ANGLE
: When ADD_LABELS
is set to TRUE
, maximum angle (in degrees) from
the vertical to use when adding labels.
The default value is '60'.
Map
.options
- Additional parameters.
SOLVE_TABLE
: Name of the table to host
intermediate solve results, in
[schema_name.]table_name format, using standard
name resolution rules and
meeting table naming criteria. This
table will contain the position and cost for
each vertex in the graph. If the default value
is used, a temporary table is created and
deleted once the solution is calculated. The
default value is ''.
IS_REPLICATED
: If set to TRUE
, replicate the SOLVE_TABLE
.
Supported values:
The default value is TRUE
.
DATA_MIN_X
: Lower bound for the x values. If
not provided, it will be computed from the
bounds of the input data.
DATA_MAX_X
: Upper bound for the x values. If
not provided, it will be computed from the
bounds of the input data.
DATA_MIN_Y
: Lower bound for the y values. If
not provided, it will be computed from the
bounds of the input data.
DATA_MAX_Y
: Upper bound for the y values. If
not provided, it will be computed from the
bounds of the input data.
CONCAVITY_LEVEL
: Factor to qualify the
concavity of the isochrone curves. The lower the
value, the more convex (with '0' being
completely convex and '1' being the most
concave). The default value is '0.5'. The
minimum allowed value is '0'. The maximum
allowed value is '1'.
USE_PRIORITY_QUEUE_SOLVERS
: Sets the solver
methods explicitly if set to true.
Supported values:
TRUE
: uses the solvers scheduled for
'shortest_path' and
'inverse_shortest_path' based on
solve_direction
FALSE
: uses the solvers
'priority_queue' and
'inverse_priority_queue' based on
solve_direction
FALSE
.
SOLVE_DIRECTION
: Specifies whether the solve proceeds
toward the source node or starts from it.
Supported values:
FROM_SOURCE
: Shortest path to get to
the source (inverse Dijkstra)
TO_SOURCE
: Shortest path to source
(Dijkstra)
FROM_SOURCE
.
Map
.Response
object containing
the results of the operation.GPUdbException
- if an error occurs during the operation.
Copyright © 2025. All rights reserved.