Constructor
new GPUdb(url, options)
Creates a GPUdb API object for the specified URL using the given options.
Once created, all options are immutable; to use a different URL or change
options, create a new instance. (Creating a new instance does not
communicate with the server and should not cause performance concerns.)
Parameters:

url (String | Array.<String>)
    The URL of the GPUdb server (e.g., http://hostname:9191). May also be
    specified as a list of URLs; all URLs in the list must be well formed.

options (Object) <optional>
    A set of configurable options for the GPUdb API.

    Properties:

    username (String) <optional>
        The username to be used for authentication to GPUdb. This username
        will be sent with every GPUdb request made via the API along with
        the specified password and may be used for authorization decisions
        by the server if it is so configured. If neither username nor
        password is specified, no authentication will be performed.

    password (String) <optional>
        The password to be used for authentication to GPUdb. This password
        will be sent with every GPUdb request made via the API along with
        the specified username and may be used for authorization decisions
        by the server if it is so configured. If neither username nor
        password is specified, no authentication will be performed.

    timeout (Number) <optional>
        The timeout value, in milliseconds, after which requests to GPUdb
        will be aborted. A timeout value of zero is interpreted as an
        infinite timeout. Note that timeout is not supported for synchronous
        requests, which will not return until a response is received and
        cannot be aborted.
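For example, a minimal sketch of creating an API object; the URL, credentials,
and timeout shown are placeholders, and the module path assumes the Node.js
distribution of the API:

    var GPUdb = require("./GPUdb.js");  // module path is an assumption

    var gpudb = new GPUdb("http://localhost:9191", {
        username: "jdoe",       // placeholder credentials
        password: "secret",
        timeout: 60000          // abort requests after 60 seconds
    });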
Members
(readonly) api_version :String
    The version number of the GPUdb JavaScript API.

(readonly) END_OF_SET :Number
    Constant used with certain requests to indicate that the maximum allowed
    number of results should be returned.

(readonly) password :String
    The password used for authentication to GPUdb. Will be an empty string
    if none was provided to the GPUdb constructor.

(readonly) timeout :Number
    The timeout value, in milliseconds, after which requests to GPUdb will
    be aborted. A timeout of zero is interpreted as an infinite timeout.
    Will be zero if none was provided to the GPUdb constructor.

(readonly) url :String
    The URL of the current GPUdb server.

(readonly) urls :Array.<String>
    The URLs of the GPUdb servers.

(readonly) username :String
    The username used for authentication to GPUdb. Will be an empty string
    if none was provided to the GPUdb constructor.
Methods
(static) decode(o) → {Object|Array.<Object>}
Decodes a JSON string, or array of JSON strings, returned from GPUdb into
JSON object(s).
Parameters:

o (String | Array.<String>)
    The JSON string(s) to decode.

Returns:
    The decoded JSON object(s).
    Type: Object | Array.<Object>
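A brief usage sketch (the JSON payloads are illustrative):

    // Decode a single JSON string into an object
    var record = GPUdb.decode('{"x": 1, "y": "a"}');

    // Decode an array of JSON strings into an array of objects
    var records = GPUdb.decode(['{"x": 1}', '{"x": 2}']);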
(static) decode_no_inf_nan(o) → {Object|Array.<Object>}
Decodes a JSON string, or array of JSON strings, returned from GPUdb into
JSON object(s). Quoted "Infinity", "-Infinity", and "NaN" values receive
special treatment and are converted to null. This is significantly slower
than the regular decode function.
Parameters:

o (String | Array.<String>)
    The JSON string(s) to decode.

Returns:
    The decoded JSON object(s).
    Type: Object | Array.<Object>
(static) decode_regular(o) → {Object|Array.<Object>}
Decodes a JSON string, or array of JSON strings, returned from GPUdb into
JSON object(s).
Parameters:

o (String | Array.<String>)
    The JSON string(s) to decode.

Returns:
    The decoded JSON object(s).
    Type: Object | Array.<Object>
(static) encode(o) → {String|Array.<String>}
Encodes a JSON object, or array of JSON objects, into JSON string(s) to be
passed to GPUdb.
Parameters:

o (Object | Array.<Object>)
    The JSON object(s) to encode.

Returns:
    The encoded JSON string(s).
    Type: String | Array.<String>
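A brief usage sketch (the objects are illustrative):

    // Encode a single object into a JSON string
    var json = GPUdb.encode({ x: 1, y: "a" });

    // Encode an array of objects into an array of JSON strings
    var jsons = GPUdb.encode([{ x: 1 }, { x: 2 }]);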
admin_add_ranks(hosts, config_params, options, callback) → {Object}
Add one or more new ranks to the Kinetica cluster. The new ranks will not
contain any data initially, other than replicated tables, and not be
assigned any shards. To rebalance data across the cluster, which includes
shifting some shard key assignments to newly added ranks, see
GPUdb#admin_rebalance
.
For example, if attempting to add three new ranks (two ranks on host
172.123.45.67 and one rank on host 172.123.45.68) to a Kinetica cluster with
additional configuration parameters:
* hosts would be an array including 172.123.45.67 in the first
two indices (signifying two ranks being added to host 172.123.45.67) and
172.123.45.68 in the last index (signifying one rank being added to host
172.123.45.68)
* config_params would be an array of maps, with each map
corresponding to the ranks being added in hosts. The key of
each map would be the configuration parameter name and the value would be
the parameter's value, e.g., 'rank.gpu':'1'
This endpoint's processing includes copying all replicated table data to the
new rank(s) and therefore could take a long time. The API call may time out
if run directly. It is recommended to run this endpoint asynchronously via
GPUdb#create_job
.
Parameters:
Name |
Type |
Description |
hosts |
Array.<String>
|
The IP address of each rank being added to the
cluster. Insert one entry per rank, even if they
are on the same host. The order of the hosts in the
array only matters as it relates to the
config_params . |
config_params |
Array.<Object>
|
Configuration parameters to apply to the
new ranks, e.g., which GPU to use.
Configuration parameters that start with
'rankN.', where N is the rank number,
should omit the N, as the new rank
number(s) are not allocated until the ranks
are created. Each entry in this array
corresponds to the entry at the same array
index in the hosts . This array
must either be completely empty or have the
same number of elements as the hosts array.
An empty array will result in the new ranks
being set only with default parameters. |
options |
Object
|
Optional parameters.
- 'dry_run': If
true , only
validation checks will be performed. No ranks are
added.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
Type: Object
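A sketch of the three-rank example above, assuming a GPUdb instance named
gpudb and the Node-style (err, response) callback; the second and third
rank.gpu values are illustrative:

    var hosts = ["172.123.45.67", "172.123.45.67", "172.123.45.68"];
    var config_params = [
        { "rank.gpu": "1" },   // first new rank on 172.123.45.67
        { "rank.gpu": "2" },   // second new rank on 172.123.45.67 (GPU index assumed)
        { "rank.gpu": "0" }    // new rank on 172.123.45.68 (GPU index assumed)
    ];

    // dry_run performs validation checks only; no ranks are added
    gpudb.admin_add_ranks(hosts, config_params, { "dry_run": "true" },
        function(err, response) {
            if (err) { console.error(err); return; }
            console.log(response);
        });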
admin_add_ranks_request(request, callback) → {Object}
Add one or more new ranks to the Kinetica cluster. The new ranks will not
contain any data initially, other than replicated tables, and not be
assigned any shards. To rebalance data across the cluster, which includes
shifting some shard key assignments to newly added ranks, see
GPUdb#admin_rebalance
.
For example, if attempting to add three new ranks (two ranks on host
172.123.45.67 and one rank on host 172.123.45.68) to a Kinetica cluster with
additional configuration parameters:
* hosts would be an array including 172.123.45.67 in the first
two indices (signifying two ranks being added to host 172.123.45.67) and
172.123.45.68 in the last index (signifying one rank being added to host
172.123.45.68)
* config_params would be an array of maps, with each map
corresponding to the ranks being added in hosts. The key of
each map would be the configuration parameter name and the value would be
the parameter's value, e.g., 'rank.gpu':'1'
This endpoint's processing includes copying all replicated table data to the
new rank(s) and therefore could take a long time. The API call may time out
if run directly. It is recommended to run this endpoint asynchronously via
GPUdb#create_job
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_alter_jobs(job_ids, action, options, callback) → {Object}
Perform the requested action on a list of one or more job(s). Based on the
type of job and the current state of execution, the action may not be
successfully executed. The final result of the attempted actions for each
specified job is returned in the status array of the response. See
Job Manager for
more information.
Parameters:
Name |
Type |
Description |
job_ids |
Array.<Number>
|
Jobs to be modified. |
action |
String
|
Action to be performed on the jobs specified by
job_ids.
Supported values:
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
Type: Object
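A hedged sketch, assuming a gpudb instance and that 'cancel' is among the
supported actions (the job ID is illustrative):

    gpudb.admin_alter_jobs([1234567], "cancel", {}, function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);  // the status array reports the result per job
    });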
admin_alter_jobs_request(request, callback) → {Object}
Perform the requested action on a list of one or more job(s). Based on the
type of job and the current state of execution, the action may not be
successfully executed. The final result of the attempted actions for each
specified job is returned in the status array of the response. See
Job Manager for
more information.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_offline(offline, options, callback) → {Object}
Take the system offline. When the system is offline, no user operations can
be performed with the exception of a system shutdown.
Parameters:
Name |
Type |
Description |
offline |
Boolean
|
Set to true if desired state is offline.
Supported values:
|
options |
Object
|
Optional parameters.
- 'flush_to_disk': Flush to disk when
going offline
Supported values:
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_offline_request(request, callback) → {Object}
Take the system offline. When the system is offline, no user operations can
be performed with the exception of a system shutdown.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_rebalance(options, callback) → {Object}
Rebalance the cluster so that all the nodes contain approximately an equal
number of records. The rebalance will also cause the shards to be equally
distributed (as much as possible) across all the ranks.
This endpoint may take a long time to run, depending on the amount of data
in the system. The API call may time out if run directly. It is recommended
to run this endpoint asynchronously via GPUdb#create_job
.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters.
- 'rebalance_sharded_data': If
true , sharded data will be rebalanced
approximately equally across the cluster. Note that
for big clusters, this data transfer could be time
consuming and result in delayed query responses.
Supported values:
The default value is 'true'.
- 'rebalance_unsharded_data': If
true , unsharded data (data without
primary keys and without shard keys) will be
rebalanced approximately equally across the
cluster. Note that for big clusters, this data
transfer could be time consuming and result in
delayed query responses.
Supported values:
The default value is 'true'.
- 'table_whitelist': Comma-separated
list of unsharded table names to rebalance. Not
applicable to sharded tables because they are
always balanced in accordance with their primary
key or shard key. Cannot be used simultaneously
with
table_blacklist .
- 'table_blacklist': Comma-separated
list of unsharded table names to not rebalance. Not
applicable to sharded tables because they are
always balanced in accordance with their primary
key or shard key. Cannot be used simultaneously
with
table_whitelist .
- 'aggressiveness': Influences how much
data to send per rebalance round. A higher
aggressiveness setting will complete the rebalance
faster. A lower aggressiveness setting will take
longer, but allow for better interleaving between
the rebalance and other queries. Allowed values are
1 through 10. The default value is '1'.
- 'compact_after_rebalance': Perform
compaction of deleted records once the rebalance
completes, to reclaim memory and disk space.
Default is true, unless
repair_incorrectly_sharded_data is set
to true .
Supported values:
The default value is 'true'.
- 'compact_only': Only perform
compaction, do not rebalance. Default is false.
Supported values:
The default value is 'false'.
- 'repair_incorrectly_sharded_data':
Scans for any data sharded incorrectly and
re-routes it to the correct location. This can be done as
part of a typical rebalance after expanding the
cluster, or in a standalone fashion when it is
believed that data is sharded incorrectly somewhere
in the cluster. Compaction will not be performed by
default when this is enabled. This option may also
lengthen rebalance time, and increase the memory
used by the rebalance.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
Type: Object
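A sketch using a few of the options above, assuming a gpudb instance; as noted,
large systems should prefer running this asynchronously via GPUdb#create_job:

    var options = {
        "rebalance_sharded_data": "true",
        "rebalance_unsharded_data": "true",
        "aggressiveness": "1"
    };
    gpudb.admin_rebalance(options, function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);
    });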
admin_rebalance_request(request, callback) → {Object}
Rebalance the cluster so that all the nodes contain approximately an equal
number of records. The rebalance will also cause the shards to be equally
distributed (as much as possible) across all the ranks.
This endpoint may take a long time to run, depending on the amount of data
in the system. The API call may time out if run directly. It is recommended
to run this endpoint asynchronously via GPUdb#create_job
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_remove_ranks(ranks, options, callback) → {Object}
Remove one or more ranks from the cluster. All data in the ranks to be
removed is rebalanced to other ranks before the node is removed unless the
rebalance_sharded_data
or
rebalance_unsharded_data
parameters are set to
false
in the
options
.
Due to the rebalancing, this endpoint may take a long time to run, depending
on the amount of data in the system. The API call may time out if run
directly. It is recommended to run this endpoint asynchronously via
GPUdb#create_job
.
Parameters:
Name |
Type |
Description |
ranks |
Array.<Number>
|
Rank numbers of the ranks to be removed from the
cluster. |
options |
Object
|
Optional parameters.
- 'rebalance_sharded_data': When
true , data with primary keys or shard
keys will be rebalanced to other ranks prior to
rank removal. Note that for big clusters, this data
transfer could be time consuming and result in
delayed query responses.
Supported values:
The default value is 'true'.
- 'rebalance_unsharded_data': When
true , unsharded data (data without
primary keys and without shard keys) will be
rebalanced to other ranks prior to rank removal.
Note that for big clusters, this data transfer
could be time consuming and result in delayed query
responses.
Supported values:
The default value is 'true'.
- 'aggressiveness': Influences how much
data to send per rebalance round, during the
rebalance portion of removing ranks. A higher
aggressiveness setting will complete the rebalance
faster. A lower aggressiveness setting will take
longer, but allow for better interleaving between
the rebalance and other queries. Allowed values are
1 through 10. The default value is '1'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_remove_ranks_request(request, callback) → {Object}
Remove one or more ranks from the cluster. All data in the ranks to be
removed is rebalanced to other ranks before the node is removed unless the
rebalance_sharded_data
or
rebalance_unsharded_data
parameters are set to
false
in the
options
.
Due to the rebalancing, this endpoint may take a long time to run, depending
on the amount of data in the system. The API call may time out if run
directly. It is recommended to run this endpoint asynchronously via
GPUdb#create_job
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_show_alerts(num_alerts, options, callback) → {Object}
Requests a list of the most recent alerts.
Returns lists of alert data, including timestamp and type.
Parameters:
Name |
Type |
Description |
num_alerts |
Number
|
Number of most recent alerts to request. The
response will include up to
num_alerts depending on how many
alerts there are in the system. A value of 0
returns all stored alerts. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_show_alerts_request(request, callback) → {Object}
Requests a list of the most recent alerts.
Returns lists of alert data, including timestamp and type.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_show_cluster_operations(history_index, options, callback) → {Object}
Requests the detailed status of the current operation (by default) or a
prior cluster operation specified by
history_index
.
Returns details on the requested cluster operation.
The response will also indicate how many cluster operations are stored in
the history.
Parameters:
Name |
Type |
Description |
history_index |
Number
|
Indicates which cluster operation to
retrieve. Use 0 for the most recent. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_show_cluster_operations_request(request, callback) → {Object}
Requests the detailed status of the current operation (by default) or a
prior cluster operation specified by
history_index
.
Returns details on the requested cluster operation.
The response will also indicate how many cluster operations are stored in
the history.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_show_jobs(options, callback) → {Object}
Get a list of the current jobs in GPUdb.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters.
- 'show_async_jobs': If
true , then the completed async jobs
are also included in the response. By default, once
the async jobs are completed they are no longer
included in the jobs list.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_show_jobs_request(request, callback) → {Object}
Get a list of the current jobs in GPUdb.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_show_shards(options, callback) → {Object}
Show the mapping of shards to the corresponding rank and TOM. The response
message contains a list of 16384 (the total number of shards in the system)
Rank and TOM numbers, one corresponding to each shard.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_show_shards_request(request, callback) → {Object}
Show the mapping of shards to the corresponding rank and TOM. The response
message contains a list of 16384 (the total number of shards in the system)
Rank and TOM numbers, one corresponding to each shard.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_shutdown(exit_type, authorization, options, callback) → {Object}
Exits the database server application.
Parameters:
Name |
Type |
Description |
exit_type |
String
|
Reserved for future use. User can pass an empty
string. |
authorization |
String
|
No longer used. User can pass an empty
string. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_shutdown_request(request, callback) → {Object}
Exits the database server application.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_verify_db(options, callback) → {Object}
Verify that the database is in a consistent state. When inconsistencies or errors
are found, the verified_ok flag in the response is set to false and the list
of errors found is provided in the error_list.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters.
- 'rebuild_on_error':
Supported values:
The default value is 'false'.
- 'verify_nulls': When enabled, verifies
that null values are set to zero
Supported values:
The default value is 'false'.
- 'verify_persist':
Supported values:
The default value is 'false'.
- 'concurrent_safe': When enabled,
allows this endpoint to be run safely with other
concurrent database operations. Other operations
may be slower while this is running.
Supported values:
The default value is 'true'.
- 'verify_rank0': When enabled, compares
rank0 table meta-data against workers meta-data
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
admin_verify_db_request(request, callback) → {Object}
Verify that the database is in a consistent state. When inconsistencies or errors
are found, the verified_ok flag in the response is set to false and the list
of errors found is provided in the error_list.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_convex_hull(table_name, x_column_name, y_column_name, options, callback) → {Object}
Calculates and returns the convex hull for the values in a table specified
by table_name
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of table on which the operation will be
performed. Must be an existing table. It cannot
be a collection. |
x_column_name |
String
|
Name of the column containing the x
coordinates of the points for the operation
being performed. |
y_column_name |
String
|
Name of the column containing the y
coordinates of the points for the operation
being performed. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_convex_hull_request(request, callback) → {Object}
Calculates and returns the convex hull for the values in a table specified
by table_name
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_group_by(table_name, column_names, offset, limit, options, callback) → {Object}
Calculates unique combinations (groups) of values for the given columns in a
given table or view and computes aggregates on each unique combination. This
is somewhat analogous to an SQL-style SELECT...GROUP BY.
For aggregation details and examples, see Aggregation. For
limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except
unrestricted-length strings may be used for computing applicable aggregates;
columns marked as store-only are unable to be used in grouping or
aggregation.
The results can be paged via the offset
and limit
parameters. For example, to get 10 groups with the largest counts the inputs
would be: limit=10, options={"sort_order":"descending", "sort_by":"value"}.
options
can be used to customize behavior of this call e.g.
filtering or sorting the results.
To group by columns 'x' and 'y' and compute the number of objects within
each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use:
column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg,
mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min,
arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets.
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to having
.
The response is returned as a dynamic schema. For details see: dynamic schemas
documentation.
If a result_table
name is specified in the
options
, the results are stored in a new table with that
name--no results are returned in the response. Both the table name and
resulting column names must adhere to standard naming
conventions; column/aggregation expressions will need to be aliased. If
the source table's shard key is used as the grouping column(s) and all result
records are selected (offset
is 0 and limit
is
-9999), the result table will be sharded, in all other cases it will be
replicated. Sorting will properly function only if the result table is
replicated or if there is only one processing node and should not be relied
upon in other cases. Not available when any of the values of
column_names
is an unrestricted-length string.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of an existing table or view on which the
operation will be performed. |
column_names |
Array.<String>
|
List of one or more column names,
expressions, and aggregate expressions. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to
indicate that the max number of results should be
returned. The number of records returned will never
exceed the server's own limit, defined by the max_get_records_size parameter in
the server configuration. Use
has_more_records to see if more records
exist in the result to be fetched, and
offset & limit to request
subsequent pages of results. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the table specified
in
result_table . If the collection
provided is non-existent, the collection will be
automatically created. If empty, then the table
will be a top-level table.
- 'expression': Filter expression to
apply to the table prior to computing the aggregate
group by.
- 'having': Filter expression to apply
to the aggregated results.
- 'sort_order': String indicating how
the returned values should be sorted - ascending or
descending.
Supported values:
- 'ascending': Indicates that the
returned values should be sorted in ascending
order.
- 'descending': Indicates that the
returned values should be sorted in descending
order.
The default value is 'ascending'.
- 'sort_by': String determining how the
results are sorted.
Supported values:
- 'key': Indicates that the returned
values should be sorted by key, which corresponds
to the grouping columns. If you have multiple
grouping columns (and are sorting by key), it will
first sort the first grouping column, then the
second grouping column, etc.
- 'value': Indicates that the returned
values should be sorted by value, which corresponds
to the aggregates. If you have multiple aggregates
(and are sorting by value), it will first sort by
the first aggregate, then the second aggregate,
etc.
The default value is 'value'.
- 'result_table': The name of the table
used to store the results. Has the same naming
restrictions as tables. Column names (group-by
and aggregate fields) need to be given aliases e.g.
["FChar256 as fchar256", "sum(FDouble) as sfd"].
If present, no results are returned in the
response. This option is not available if one of
the grouping attributes is an unrestricted string
(i.e.; not charN) type.
- 'result_table_persist': If
true , then the result table specified
in result_table will be persisted and
will not expire unless a ttl is
specified. If false , then the result
table will be an in-memory table and will expire
unless a ttl is specified otherwise.
Supported values:
The default value is 'false'.
- 'result_table_force_replicated': Force
the result table to be replicated (ignores any
sharding). Must be used in combination with the
result_table option.
Supported values:
The default value is 'false'.
- 'result_table_generate_pk': If
true then set a primary key for the
result table. Must be used in combination with the
result_table option.
Supported values:
The default value is 'false'.
- 'ttl': Sets the TTL of the table specified in
result_table .
- 'chunk_size': Indicates the number of
records per chunk to be used for the result table.
Must be used in combination with the
result_table option.
- 'create_indexes': Comma-separated list
of columns on which to create indexes on the result
table. Must be used in combination with the
result_table option.
- 'view_id': ID of view of which the
result table will be a member. The default value
is ''.
- 'materialize_on_gpu': No longer used.
See Resource Management Concepts for
information about how resources are managed, Tier
Strategy Concepts for how resources are
targeted for VRAM, and Tier Strategy Usage for how to
specify a table's priority in VRAM.
Supported values:
The default value is 'false'.
- 'pivot': pivot column
- 'pivot_values': The value list
provided will become the column headers in the
output. Should be the values from the pivot_column.
- 'grouping_sets': Customize the
grouping attribute sets to compute the aggregates.
These sets can include ROLLUP or CUBE operators.
The attribute sets should be enclosed in
parentheses and can include composite attributes.
All attributes specified in the grouping sets must
be present in the group-by attributes.
- 'rollup': This option is used to
specify the multilevel aggregates.
- 'cube': This option is used to specify
the multidimensional aggregates.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
Type: Object
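A sketch of the grouping example above, assuming a gpudb instance (the table
name is illustrative): group by x and y, count each group and sum z, and
return the 10 groups with the largest counts:

    gpudb.aggregate_group_by(
        "my_table",                               // illustrative table name
        ["x", "y", "count(*)", "sum(z)"],
        0,                                        // offset
        10,                                       // limit
        { "sort_order": "descending", "sort_by": "value" },
        function(err, response) {
            if (err) { console.error(err); return; }
            console.log(response);                // dynamic-schema encoded results
        }
    );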
aggregate_group_by_request(request, callback) → {Object}
Calculates unique combinations (groups) of values for the given columns in a
given table or view and computes aggregates on each unique combination. This
is somewhat analogous to an SQL-style SELECT...GROUP BY.
For aggregation details and examples, see Aggregation. For
limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except
unrestricted-length strings may be used for computing applicable aggregates;
columns marked as store-only are unable to be used in grouping or
aggregation.
The results can be paged via the offset
and limit
parameters. For example, to get 10 groups with the largest counts the inputs
would be: limit=10, options={"sort_order":"descending", "sort_by":"value"}.
options
can be used to customize behavior of this call e.g.
filtering or sorting the results.
To group by columns 'x' and 'y' and compute the number of objects within
each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use:
column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg,
mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min,
arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets.
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to having
.
The response is returned as a dynamic schema. For details see: dynamic schemas
documentation.
If a result_table
name is specified in the
options
, the results are stored in a new table with that
name--no results are returned in the response. Both the table name and
resulting column names must adhere to standard naming
conventions; column/aggregation expressions will need to be aliased. If
the source table's shard key is used as the grouping column(s) and all result
records are selected (offset
is 0 and limit
is
-9999), the result table will be sharded, in all other cases it will be
replicated. Sorting will properly function only if the result table is
replicated or if there is only one processing node and should not be relied
upon in other cases. Not available when any of the values of
column_names
is an unrestricted-length string.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_histogram(table_name, column_name, start, end, interval, options, callback) → {Object}
Performs a histogram calculation given a table, a column, and an interval
function. The
interval
is used to produce bins of that size and
the result, computed over the records falling within each bin, is returned.
For each bin, the start value is inclusive, but the end value is
exclusive--except for the very last bin for which the end value is also
inclusive. The value returned for each bin is the number of records in it,
except when a column name is provided as a
value_column
. In
this latter case the sum of the values corresponding to the
value_column
is used as the result instead. The total number
of bins requested cannot exceed 10,000.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based) build to service a request that specifies a
value_column
option.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the operation will be
performed. Must be an existing table or
collection. |
column_name |
String
|
Name of a column or an expression of one or
more column names over which the histogram will
be calculated. |
start |
Number
|
Lower end value of the histogram interval, inclusive. |
end |
Number
|
Upper end value of the histogram interval, inclusive. |
interval |
Number
|
The size of each bin within the start and end
parameters. |
options |
Object
|
Optional parameters.
- 'value_column': The name of the column
to use when calculating the bin values (values are
summed). The column must be a numerical type (int,
double, long, float).
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
Type: Object
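A sketch assuming a gpudb instance; the table name, column name, and range
are illustrative:

    // Bin column "value" from 0 to 100 in bins of width 10
    gpudb.aggregate_histogram(
        "my_table", "value",
        0,        // start (inclusive)
        100,      // end (inclusive for the final bin only)
        10,       // interval (bin width)
        {},
        function(err, response) {
            if (err) { console.error(err); return; }
            console.log(response);  // per-bin counts are returned
        }
    );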
aggregate_histogram_request(request, callback) → {Object}
Performs a histogram calculation given a table, a column, and an interval
function. The
interval
is used to produce bins of that size and
the result, computed over the records falling within each bin, is returned.
For each bin, the start value is inclusive, but the end value is
exclusive--except for the very last bin for which the end value is also
inclusive. The value returned for each bin is the number of records in it,
except when a column name is provided as a
value_column
. In
this latter case the sum of the values corresponding to the
value_column
is used as the result instead. The total number
of bins requested cannot exceed 10,000.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based) build to service a request that specifies a
value_column
option.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_k_means(table_name, column_names, k, tolerance, options, callback) → {Object}
This endpoint runs the k-means algorithm - a heuristic algorithm that
attempts to do k-means clustering. An ideal k-means clustering algorithm
selects k points such that the sum of the mean squared distances of each
member of the set to the nearest of the k points is minimized. The k-means
algorithm however does not necessarily produce such an ideal cluster. It
begins with a randomly selected set of k points and then refines the
location of the points iteratively and settles to a local minimum. Various
parameters and options are provided to control the heuristic search.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based) build to service this request.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the operation will be
performed. Must be an existing table or
collection. |
column_names |
Array.<String>
|
List of column names on which the operation
would be performed. If n columns are
provided then each of the k result points
will have n dimensions corresponding to the
n columns. |
k |
Number
|
The number of mean points to be determined by the
algorithm. |
tolerance |
Number
|
Stop iterating when the distances between
successive points is less than the given
tolerance. |
options |
Object
|
Optional parameters.
- 'whiten': When set to 1 each of the
columns is first normalized by its stdv - default
is not to whiten.
- 'max_iters': Number of times to try to
hit the tolerance limit before giving up - default
is 10.
- 'num_tries': Number of times to run
the k-means algorithm with a different randomly
selected starting points - helps avoid local
minimum. Default is 1.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
Type: Object
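A sketch assuming a gpudb instance; the table name, columns, and parameter
values are illustrative:

    // Cluster on columns x and y into k = 3 groups
    gpudb.aggregate_k_means(
        "my_table",
        ["x", "y"],
        3,                    // k: number of mean points to determine
        0.01,                 // tolerance: stop when points move less than this
        { "whiten": "1", "max_iters": "20", "num_tries": "3" },
        function(err, response) {
            if (err) { console.error(err); return; }
            console.log(response);
        }
    );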
aggregate_k_means_request(request, callback) → {Object}
This endpoint runs the k-means algorithm - a heuristic algorithm that
attempts to do k-means clustering. An ideal k-means clustering algorithm
selects k points such that the sum of the mean squared distances of each
member of the set to the nearest of the k points is minimized. The k-means
algorithm however does not necessarily produce such an ideal cluster. It
begins with a randomly selected set of k points and then refines the
location of the points iteratively and settles to a local minimum. Various
parameters and options are provided to control the heuristic search.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based) build to service this request.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_min_max(table_name, column_name, options, callback) → {Object}
Calculates and returns the minimum and maximum values of a particular column
in a table.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the operation will be
performed. Must be an existing table. |
column_name |
String
|
Name of a column or an expression of one or
more columns on which the min-max will be
calculated. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
Type: Object
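A sketch assuming a gpudb instance (table and column names are illustrative):

    gpudb.aggregate_min_max("my_table", "x", {}, function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);  // contains the minimum and maximum of column x
    });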
aggregate_min_max_geometry(table_name, column_name, options, callback) → {Object}
Calculates and returns the minimum and maximum x- and y-coordinates of a
particular geospatial geometry column in a table.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the operation will be
performed. Must be an existing table. |
column_name |
String
|
Name of a geospatial geometry column on which
the min-max will be calculated. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_min_max_geometry_request(request, callback) → {Object}
Calculates and returns the minimum and maximum x- and y-coordinates of a
particular geospatial geometry column in a table.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_min_max_request(request, callback) → {Object}
Calculates and returns the minimum and maximum values of a particular column
in a table.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_statistics(table_name, column_name, stats, options, callback) → {Object}
Calculates the requested statistics of the given column(s) in a given table.
The available statistics are count
(number of total objects),
mean
, stdv
(standard deviation),
variance
, skew
, kurtosis
,
sum
, min
, max
,
weighted_average
, cardinality
(unique count),
estimated_cardinality
, percentile
and
percentile_rank
.
Estimated cardinality is calculated by using the hyperloglog approximation
technique.
Percentiles and percentile ranks are approximate and are calculated using
the t-digest algorithm. They must include the desired
percentile
/percentile_rank
. To compute multiple
percentiles each value must be specified separately (i.e.
'percentile(75.0),percentile(99.0),percentile_rank(1234.56),percentile_rank(-5)').
A second, comma-separated value can be added to the percentile
statistic to calculate percentile resolution, e.g., a 50th percentile with
200 resolution would be 'percentile(50,200)'.
The weighted average statistic requires a weight_column_name
to
be specified in options
. The weighted average is then defined
as the sum of the products of column_name
times the
weight_column_name
values divided by the sum of the
weight_column_name
values.
Additional columns can be used in the calculation of statistics via the
additional_column_names
option. Values in these columns will
be included in the overall aggregate calculation--individual aggregates will
not be calculated per additional column. For instance, requesting the
count
& mean
of column_name
x and
additional_column_names
y & z, where x holds the numbers 1-10,
y holds 11-20, and z holds 21-30, would return the total number of x, y, & z
values (30), and the single average value across all x, y, & z values
(15.5).
The response includes a list of key/value pairs of each statistic requested
and its corresponding value.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the statistics
operation will be performed. |
column_name |
String
|
Name of the primary column for which the
statistics are to be calculated. |
stats |
String
|
Comma separated list of the statistics to calculate,
e.g. "sum,mean".
Supported values:
- 'count': Number of objects (independent
of the given column(s)).
- 'mean': Arithmetic mean (average),
equivalent to sum/count.
- 'stdv': Sample standard deviation
(denominator is count-1).
- 'variance': Unbiased sample variance
(denominator is count-1).
- 'skew': Skewness (third standardized
moment).
- 'kurtosis': Kurtosis (fourth
standardized moment).
- 'sum': Sum of all values in the
column(s).
- 'min': Minimum value of the column(s).
- 'max': Maximum value of the column(s).
- 'weighted_average': Weighted arithmetic
mean (using the option
weight_column_name as the weighting
column).
- 'cardinality': Number of unique values
in the column(s).
- 'estimated_cardinality': Estimate (via
hyperloglog technique) of the number of unique values
in the column(s).
- 'percentile': Estimate (via t-digest) of
the given percentile of the column(s)
(percentile(50.0) will be an approximation of the
median). Add a second, comma-separated value to
calculate percentile resolution, e.g.,
'percentile(75,150)'
- 'percentile_rank': Estimate (via
t-digest) of the percentile rank of the given value
in the column(s) (if the given value is the median of
the column(s), percentile_rank() will return
approximately 50.0).
|
options |
Object
|
Optional parameters.
- 'additional_column_names': A list of
comma separated column names over which statistics
can be accumulated along with the primary column.
All columns listed and
column_name
must be of the same type. Must not include the
column specified in column_name and no
column can be listed twice.
- 'weight_column_name': Name of column
used as weighting attribute for the weighted
average statistic.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
Type: Object
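A sketch of the count/mean example above, assuming a gpudb instance (the table
name is illustrative):

    // Count, mean, and an approximate median (50th percentile, 200 resolution)
    // of column x, pooling columns y and z into the same calculation
    gpudb.aggregate_statistics(
        "my_table",
        "x",
        "count,mean,percentile(50,200)",
        { "additional_column_names": "y,z" },
        function(err, response) {
            if (err) { console.error(err); return; }
            console.log(response);  // key/value pairs of statistic name to value
        }
    );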
aggregate_statistics_by_range(table_name, select_expression, column_name, value_column_name, stats, start, end, interval, options, callback) → {Object}
Divides the given set into bins and calculates statistics of the values of a
value-column in each bin. The bins are based on the values of a given
binning-column. The statistics that may be requested are mean, stdv
(standard deviation), variance, skew, kurtosis, sum, min, max, first, last
and weighted average. In addition to the requested statistics the count of
total samples in each bin is returned. This counts vector is just the
histogram of the column used to divide the set members into bins. The
weighted average statistic requires a weight_column to be specified in
options
. The weighted average is then defined as the sum of the
products of the value column times the weight column divided by the sum of
the weight column.
There are two methods for binning the set members. In the first, which can
be used for numeric valued binning-columns, a min, max and interval are
specified. The number of bins, nbins, is the integer upper bound of
(max-min)/interval. Values that fall in the range
[min+n*interval,min+(n+1)*interval) are placed in the nth bin where n ranges
from 0..nbin-2. The final bin is [min+(nbin-1)*interval,max]. In the second
method, the bin_values option in options specifies a list of binning-column
values. Binning-columns whose value matches the nth member of the bin_values
list are placed in the nth bin. When a list is provided the binning-column
must be of type string or int.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based) build to service this request.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the ranged-statistics
operation will be performed. |
select_expression |
String
|
For a non-empty expression statistics are
calculated for those records for which
the expression is true. |
column_name |
String
|
Name of the binning-column used to divide the
set samples into bins. |
value_column_name |
String
|
Name of the value-column for which
statistics are to be computed. |
stats |
String
|
A string of comma separated list of the statistics to
calculate, e.g. 'sum,mean'. Available statistics:
mean, stdv (standard deviation), variance, skew,
kurtosis, sum. |
start |
Number
|
The lower bound of the binning-column. |
end |
Number
|
The upper bound of the binning-column. |
interval |
Number
|
The interval of a bin. Set members fall into bin i
if the binning-column falls in the range
[start+interval*i, start+interval*(i+1)). |
options |
Object
|
Map of optional parameters:
- 'additional_column_names': A list of
comma separated value-column names over which
statistics can be accumulated along with the
primary value_column.
- 'bin_values': A list of comma
separated binning-column values. Values that match
the nth bin_values value are placed in the nth bin.
- 'weight_column_name': Name of the
column used as weighting column for the
weighted_average statistic.
- 'order_column_name': Name of the
column used for candlestick charting techniques.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_statistics_by_range_request(request, callback) → {Object}
Divides the given set into bins and calculates statistics of the values of a
value-column in each bin. The bins are based on the values of a given
binning-column. The statistics that may be requested are mean, stdv
(standard deviation), variance, skew, kurtosis, sum, min, max, first, last
and weighted average. In addition to the requested statistics the count of
total samples in each bin is returned. This counts vector is just the
histogram of the column used to divide the set members into bins. The
weighted average statistic requires a weight_column to be specified in
options
. The weighted average is then defined as the sum of the
products of the value column times the weight column divided by the sum of
the weight column.
There are two methods for binning the set members. In the first, which can
be used for numeric valued binning-columns, a min, max and interval are
specified. The number of bins, nbins, is the integer upper bound of
(max-min)/interval. Values that fall in the range
[min+n*interval,min+(n+1)*interval) are placed in the nth bin where n ranges
from 0..nbin-2. The final bin is [min+(nbin-1)*interval,max]. In the second
method, the bin_values option in options specifies a list of binning-column
values. Binning-columns whose value matches the nth member of the bin_values
list are placed in the nth bin. When a list is provided the binning-column
must be of type string or int.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based) build to service this request.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_statistics_request(request, callback) → {Object}
Calculates the requested statistics of the given column(s) in a given table.
The available statistics are count
(number of total objects),
mean
, stdv
(standard deviation),
variance
, skew
, kurtosis
,
sum
, min
, max
,
weighted_average
, cardinality
(unique count),
estimated_cardinality
, percentile
and
percentile_rank
.
Estimated cardinality is calculated by using the hyperloglog approximation
technique.
Percentiles and percentile ranks are approximate and are calculated using
the t-digest algorithm. They must include the desired
percentile
/percentile_rank
. To compute multiple
percentiles each value must be specified separately (i.e.
'percentile(75.0),percentile(99.0),percentile_rank(1234.56),percentile_rank(-5)').
A second, comma-separated value can be added to the percentile
statistic to calculate percentile resolution, e.g., a 50th percentile with
200 resolution would be 'percentile(50,200)'.
The weighted average statistic requires a weight_column_name
to
be specified in options
. The weighted average is then defined
as the sum of the products of column_name
times the
weight_column_name
values divided by the sum of the
weight_column_name
values.
Additional columns can be used in the calculation of statistics via the
additional_column_names
option. Values in these columns will
be included in the overall aggregate calculation--individual aggregates will
not be calculated per additional column. For instance, requesting the
count
& mean
of column_name
x and
additional_column_names
y & z, where x holds the numbers 1-10,
y holds 11-20, and z holds 21-30, would return the total number of x, y, & z
values (30), and the single average value across all x, y, & z values
(15.5).
The response includes a list of key/value pairs of each statistic requested
and its corresponding value.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_unique(table_name, column_name, offset, limit, options, callback) → {Object}
Returns all the unique values from a particular column (specified by
column_name
) of a particular table or view (specified by
table_name
). If
column_name
is a numeric column
the values will be in
binary_encoded_response
. Otherwise if
column_name
is a string column the values will be in
json_encoded_response
. The results can be paged via the
offset
and
limit
parameters.
Columns marked as store-only are unable to be used with this function.
To get the first 10 unique values sorted in descending order,
options would be:
{"limit":"10","sort_order":"descending"}.
The response is returned as a dynamic schema. For details see: dynamic schemas
documentation.
If a result_table
name is specified in the
options
, the results are stored in a new table with that
name--no results are returned in the response. Both the table name and
resulting column name must adhere to standard naming
conventions; any column expression will need to be aliased. If the
source table's shard key is used as the column_name
, the
result table will be sharded, in all other cases it will be replicated.
Sorting will properly function only if the result table is replicated or if
there is only one processing node and should not be relied upon in other
cases. Not available if the value of column_name
is an
unrestricted-length string.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of an existing table or view on which the
operation will be performed. |
column_name |
String
|
Name of the column or an expression containing
one or more column names on which the unique
function would be applied. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned. Or END_OF_SET (-9999) to
indicate that the max number of results should be
returned. The number of records returned will never
exceed the server's own limit, defined by the max_get_records_size parameter in
the server configuration. Use
has_more_records to see if more records
exist in the result to be fetched, and
offset & limit to request
subsequent pages of results. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the table specified
in
result_table . If the collection
provided is non-existent, the collection will be
automatically created. If empty, then the table
will be a top-level table.
- 'expression': Optional filter
expression to apply to the table.
- 'sort_order': String indicating how
the returned values should be sorted.
Supported values:
The default value is 'ascending'.
- 'result_table': The name of the table
used to store the results. If present, no results
are returned in the response. Has the same naming
restrictions as tables. Not available if
column_name is an unrestricted-length
string.
- 'result_table_persist': If
true , then the result table specified
in result_table will be persisted and
will not expire unless a ttl is
specified. If false , then the result
table will be an in-memory table and will expire
unless a ttl is specified otherwise.
Supported values:
The default value is 'false'.
- 'result_table_force_replicated': Force
the result table to be replicated (ignores any
sharding). Must be used in combination with the
result_table option.
Supported values:
The default value is 'false'.
- 'result_table_generate_pk': If
true then set a primary key for the
result table. Must be used in combination with the
result_table option.
Supported values:
The default value is 'false'.
- 'ttl': Sets the TTL of the table specified in
result_table .
- 'chunk_size': Indicates the number of
records per chunk to be used for the result table.
Must be used in combination with the
result_table option.
- 'view_id': ID of view of which the
result table will be a member. The default value
is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
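A usage sketch matching the options example above; the table and column names
are hypothetical and the Node-style (err, response) callback signature is an
assumption:
    var db = new GPUdb("http://localhost:9191");
    // First 10 unique values of a column, sorted in descending order.
    db.aggregate_unique("employees", "department", 0, 10,
        { "sort_order": "descending" },
        function(err, response) {
            if (err) { console.error(err); return; }
            console.log(response);   // records arrive as a dynamic schema
        });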
aggregate_unique_request(request, callback) → {Object}
Returns all the unique values from a particular column (specified by
column_name) of a particular table or view (specified by table_name). If
column_name is a numeric column, the values will be in
binary_encoded_response. Otherwise, if column_name is a string column, the
values will be in json_encoded_response. The results can be paged via the
offset and limit parameters.
Columns marked as store-only cannot be used with this function.
To get the first 10 unique values sorted in descending order, options would
be: {"limit":"10","sort_order":"descending"}.
The response is returned as a dynamic schema. For details see: dynamic schemas
documentation.
If a result_table name is specified in the options, the results are stored in
a new table with that name--no results are returned in the response. Both the
table name and resulting column name must adhere to standard naming
conventions; any column expression will need to be aliased. If the source
table's shard key is used as the column_name, the result table will be
sharded; in all other cases it will be replicated. Sorting will function
properly only if the result table is replicated or if there is only one
processing node, and should not be relied upon in other cases. Not available
if the value of column_name is an unrestricted-length string.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
aggregate_unpivot(table_name, column_names, variable_column_name, value_column_name, pivoted_columns, options, callback) → {Object}
Rotates column values into row values.
For unpivot details and examples, see Unpivot. For limitations, see Unpivot
Limitations.
Unpivot is used to normalize tables that are built for cross-tabular reporting
purposes. The unpivot operator rotates the column values for all the pivoted
columns. A variable column, a value column, and all columns from the source
table except the unpivot columns are projected into the result table. The
variable column and value column in the result table indicate the pivoted
column name and values, respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas
documentation.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the operation will be
performed. Must be an existing table/view. |
column_names |
Array.<String>
|
List of column names or expressions. A
wildcard '*' can be used to include all the
non-pivoted columns from the source table. |
variable_column_name |
String
|
Specifies the variable/parameter
column name. |
value_column_name |
String
|
Specifies the value column name. |
pivoted_columns |
Array.<String>
|
List of one or more values, typically the
column names of the input table. All the
columns in the source table must have the
same data type. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the table specified
in
result_table . If the collection
provided is non-existent, the collection will be
automatically created. If empty, then the table
will be a top-level table.
- 'result_table': The name of the table
used to store the results. Has the same naming
restrictions as tables. If present, no results
are returned in the response.
- 'result_table_persist': If
true , then the result table specified
in result_table will be persisted and
will not expire unless a ttl is
specified. If false , then the result
table will be an in-memory table and will expire
unless a ttl is specified otherwise.
Supported values:
The default value is 'false'.
- 'expression': Filter expression to
apply to the table prior to unpivot processing.
- 'order_by': Comma-separated list of
the columns to be sorted by; e.g. 'timestamp asc, x
desc'. The columns specified must be present in
input table. If any alias is given for any column
name, the alias must be used, rather than the
original column name. The default value is ''.
- 'chunk_size': Indicates the number of
records per chunk to be used for the result table.
Must be used in combination with the
result_table option.
- 'limit': The number of records to
keep. The default value is ''.
- 'ttl': Sets the TTL of the table specified in
result_table .
- 'view_id': view this result table is
part of. The default value is ''.
- 'materialize_on_gpu': No longer used.
See Resource Management Concepts for
information about how resources are managed, Tier
Strategy Concepts for how resources are
targeted for VRAM, and Tier Strategy Usage for how to
specify a table's priority in VRAM.
Supported values:
The default value is 'false'.
- 'create_indexes': Comma-separated list
of columns on which to create indexes on the table
specified in
result_table . The columns
specified must be present in output column names.
If any alias is given for any column name, the
alias must be used, rather than the original column
name.
- 'result_table_force_replicated': Force
the result table to be replicated (ignores any
sharding). Must be used in combination with the
result_table option.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
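An illustrative sketch (all table and column names are hypothetical); no
callback is passed, so the call is synchronous:
    var db = new GPUdb("http://localhost:9191");
    // Fold three quarterly sales columns into (quarter, sales) rows.
    var response = db.aggregate_unpivot(
        "sales_by_quarter",                    // source table
        ["*"],                                 // keep all non-pivoted columns
        "quarter",                             // variable column: holds the pivoted column name
        "sales",                               // value column: holds the pivoted column's value
        ["q1_sales", "q2_sales", "q3_sales"],  // pivoted columns (must share a data type)
        { "result_table": "sales_unpivoted" }
    );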
aggregate_unpivot_request(request, callback) → {Object}
Rotates column values into row values.
For unpivot details and examples, see Unpivot. For limitations, see Unpivot
Limitations.
Unpivot is used to normalize tables that are built for cross-tabular reporting
purposes. The unpivot operator rotates the column values for all the pivoted
columns. A variable column, a value column, and all columns from the source
table except the unpivot columns are projected into the result table. The
variable column and value column in the result table indicate the pivoted
column name and values, respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas
documentation.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
alter_resource_group(name, tier_attributes, ranking, adjoining_resource_group, options, callback) → {Object}
Alters the properties of an existing resource group to facilitate resource
management.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the group to be altered. Must be an existing
resource group name. |
tier_attributes |
Object
|
Optional map containing tier names and
their respective attribute group limits.
The only valid attribute limit that can be
set is max_memory (in bytes) for the VRAM &
RAM tiers.
For instance, to set max VRAM capacity to
1GB and max RAM capacity to 10GB, use:
{'VRAM':{'max_memory':'1000000000'},
'RAM':{'max_memory':'10000000000'}}
- 'max_memory': Maximum amount
of memory usable in the given tier at one
time for this group.
|
ranking |
String
|
If the resource group ranking is to be updated,
this indicates the relative ranking among existing
resource groups where this resource group will be
moved; leave blank if not changing the ranking.
When using before or
after , specify which resource group
this one will be inserted before or after in
adjoining_resource_group .
Supported values:
- ''
- 'first'
- 'last'
- 'before'
- 'after'
The default value is ''. |
adjoining_resource_group |
String
|
If ranking is
before or
after , this field
indicates the resource group
before or after which the current
group will be placed; otherwise,
leave blank. |
options |
Object
|
Optional parameters.
- 'max_cpu_concurrency': Maximum number
of simultaneous threads that will be used to
execute a request for this group.
- 'max_scheduling_priority': Maximum
priority of a scheduled task for this group.
- 'max_tier_priority': Maximum priority
of a tiered object for this group.
- 'is_default_group': If
true , this request applies to the
global default resource group. It is an error for
this field to be true when the
name field is also populated.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
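A sketch of a typical call (the group name is hypothetical; the
tier_attributes map mirrors the example given above):
    var db = new GPUdb("http://localhost:9191");
    // Cap the group's VRAM at 1 GB and RAM at 10 GB, and move it to the top of the ranking.
    var response = db.alter_resource_group(
        "analyst_group",
        { "VRAM": { "max_memory": "1000000000" },
          "RAM":  { "max_memory": "10000000000" } },
        "first",   // ranking
        "",        // adjoining_resource_group: only needed when ranking is 'before'/'after'
        { "max_cpu_concurrency": "4" }
    );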
alter_resource_group_request(request, callback) → {Object}
Alters the properties of an existing resource group to facilitate resource
management.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
alter_role(name, action, value, options, callback) → {Object}
Alters a Role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the role to be altered. Must be an existing
role. |
action |
String
|
Modification operation to be applied to the role.
Supported values:
- 'set_resource_group': Sets the resource
group for an internal role. The resource group must
exist, otherwise, an empty string assigns the role
to the default resource group.
|
value |
String
|
The value of the modification, depending on
action . |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
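A minimal sketch (role and resource group names are hypothetical):
    var db = new GPUdb("http://localhost:9191");
    // Assign an existing internal role to an existing resource group.
    var response = db.alter_role("analyst_role", "set_resource_group", "analyst_group", {});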
alter_role_request(request, callback) → {Object}
Alters a Role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
alter_system_properties(property_updates_map, options, callback) → {Object}
The GPUdb#alter_system_properties endpoint is primarily used to simplify
testing of the system and is not expected to be used during normal execution.
Commands are given through the property_updates_map, whose keys are commands
and whose values are strings representing integer values (for example '8000')
or boolean values ('true' or 'false').
Parameters:
Name |
Type |
Description |
property_updates_map |
Object
|
Map containing the properties of the
system to be updated. Error if empty.
- 'sm_omp_threads': Set the
number of OpenMP threads that will be
used to service filter & aggregation
requests against collections to the
specified integer value.
- 'kernel_omp_threads': Set
the number of kernel OpenMP threads to
the specified integer value.
-
'concurrent_kernel_execution': Enables
concurrent kernel execution if the
value is
true and
disables it if the value is
false .
Supported values:
-
'subtask_concurrency_limit': Sets the
maximum number of simultaneous threads
allocated to a given request, on each
rank. Note that thread allocation may
also be limited by resource group
limits and/or system load.
- 'chunk_size': Sets the
number of records per chunk to be used
for all new tables.
- 'evict_columns': Attempts
to evict columns from memory to the
persistent store. Value string is a
semicolon separated list of entries,
each entry being a table name
optionally followed by a comma and a
comma separated list of column names
to attempt to evict. An empty value
string will attempt to evict all
tables and columns.
- 'execution_mode': Sets
the execution_mode for kernel
executions to the specified string
value. Possible values are host,
device, default (engine decides) or an
integer value that indicates max chunk
size to exec on host
-
'external_files_directory': Sets the
root directory path where external
table data files are accessed from.
Path must exist on the head node
- 'flush_to_disk': Flushes
any changes to any tables to the
persistent store. These changes
include updates to the vector store,
object store, and text search store.
The value string is ignored.
- 'clear_cache': Clears
cached results. Useful to allow
repeated timing of endpoints. Value
string is the name of the table for
which to clear the cached results, or
an empty string to clear the cached
results for all tables.
- 'communicator_test':
Invoke the communicator test and
report timing results. Value string
is a semicolon separated list of
[key]=[value] expressions.
Expressions are:
num_transactions=[num] where num is
the number of request reply
transactions to invoke per test;
message_size=[bytes] where bytes is
the size in bytes of the messages to
send; check_values=[enabled] where if
enabled is true the value of the
messages received are verified.
-
'set_message_timers_enabled': Enables
the communicator test to collect
additional timing statistics when the
value string is
true .
Disables the collection when the value
string is false.
Supported values:
- 'network_speed': Invoke
the network speed test and report
timing results. Value string is a
semicolon-separated list of
[key]=[value] expressions. Valid
expressions are: seconds=[time] where
time is the time in seconds to run the
test; data_size=[bytes] where bytes is
the size in bytes of the block to be
transferred; threads=[number of
threads]; to_ranks=[space-separated
list of ranks] where the list of ranks
is the ranks that rank 0 will send
data to and get data from. If to_ranks
is unspecified then all worker ranks
are used.
- 'request_timeout': Number
of minutes after which filtering
(e.g.,
GPUdb#filter ) and
aggregating (e.g.,
GPUdb#aggregate_group_by )
queries will timeout. The default
value is '20'.
- 'max_get_records_size':
The maximum number of records the
database will serve for a given data
retrieval call. The default value is
'20000'.
- 'enable_audit': Enable or
disable auditing.
- 'audit_headers': Enable
or disable auditing of request
headers.
- 'audit_body': Enable or
disable auditing of request bodies.
- 'audit_data': Enable or
disable auditing of request data.
- 'shadow_agg_size': Size
of the shadow aggregate chunk cache in
bytes. The default value is
'10000000'.
- 'shadow_filter_size':
Size of the shadow filter chunk cache
in bytes. The default value is
'10000000'.
-
'synchronous_compression': compress
vector on set_compression (instead of
waiting for background thread). The
default value is 'false'.
-
'enable_overlapped_equi_join': Enable
overlapped-equi-join filter. The
default value is 'true'.
-
'enable_compound_equi_join': Enable
compound-equi-join filter plan type.
The default value is 'false'.
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
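A sketch using two of the documented keys; values are strings, as noted above:
    var db = new GPUdb("http://localhost:9191");
    // Raise the per-request record cap and clear cached results for all tables.
    var response = db.alter_system_properties(
        { "max_get_records_size": "50000",
          "clear_cache": "" },   // empty string clears the cache for every table
        {}
    );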
alter_system_properties_request(request, callback) → {Object}
The GPUdb#alter_system_properties endpoint is primarily used to simplify
testing of the system and is not expected to be used during normal execution.
Commands are given through the property_updates_map, whose keys are commands
and whose values are strings representing integer values (for example '8000')
or boolean values ('true' or 'false').
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
alter_table(table_name, action, value, options, callback) → {Object}
Apply various modifications to a table, view, or collection. The
available modifications include the following:
Manage a table's columns--a column can be added, removed, or have its
type and properties
modified, including
whether it is compressed or not.
Create or delete an index on a
particular column. This can speed up certain operations when using
expressions
containing equality or relational operators on indexed columns. This only
applies to tables.
Create or delete a foreign key
on a particular column.
Manage a
range-partitioned or a
manual list-partitioned
table's partitions.
Set (or reset) the tier strategy
of a table or view.
Refresh and manage the refresh mode of a
materialized
view.
Set the time-to-live
(TTL). This can be applied
to tables, views, or collections. When applied to collections, every
contained
table & view that is not protected will have its TTL set to the given value.
Set the global access mode (i.e. locking) for a table. This setting trumps
any
role-based access controls that may be in place; e.g., a user with write
access
to a table marked read-only will not be able to insert records into it. The
mode
can be set to read-only, write-only, read/write, and no access.
Change the protection mode to prevent or
allow automatic expiration. This can be applied to tables, views, and
collections.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Table on which the operation will be performed.
Must be an existing table, view, or collection. |
action |
String
|
Modification operation to be applied
Supported values:
- 'allow_homogeneous_tables': No longer
supported; action will be ignored.
- 'create_index': Creates either a column (attribute) index or chunk skip index, depending on the
specified
index_type , on the column
name specified in value . If this column
already has the specified index, an error will be
returned.
- 'delete_index': Deletes either a column (attribute) index or chunk skip index, depending on the
specified
index_type , on the column
name specified in value . If this column
does not have the specified index, an error will be
returned.
- 'move_to_collection': Moves a table or
view into a collection named
value . If
the collection provided is non-existent, the
collection will be automatically created. If
value is empty, then the table or view
will be top-level.
- 'protected': Sets whether the given
table_name should be protected or not. The
value must be either 'true' or 'false'.
- 'rename_table': Renames a table, view
or collection to
value . Has the same
naming restrictions as tables.
- 'ttl': Sets the time-to-live in minutes of the
table, view, or collection specified in
table_name .
- 'add_column': Adds the column specified
in
value to the table specified in
table_name . Use
column_type and
column_properties in
options to set the column's type and
properties, respectively.
- 'change_column': Changes type and
properties of the column specified in
value . Use column_type
and column_properties in
options to set the column's type and
properties, respectively. Note that primary key
and/or shard key columns cannot be changed. All
unchanging column properties must be listed for the
change to take place, e.g., to add dictionary
encoding to an existing 'char4' column, both 'char4'
and 'dict' must be specified in the
options map.
- 'set_column_compression': Modifies the
compression setting on the column
specified in
value to the compression
type specified in compression_type .
- 'delete_column': Deletes the column
specified in
value from the table
specified in table_name .
- 'create_foreign_key': Creates a foreign key specified in
value using the format
'(source_column_name [, ...]) references
target_table_name(primary_key_column_name [, ...])
[as foreign_key_name]'.
- 'delete_foreign_key': Deletes a foreign key. The
value should be the foreign_key_name
specified when creating the key or the complete
string used to define it.
- 'add_partition': Adds the partition
specified in
value , to either a range-partitioned or manual list-partitioned table.
- 'remove_partition': Removes the
partition specified in
value (and
relocates all of its data to the default partition)
from either a range-partitioned or manual list-partitioned table.
- 'delete_partition': Deletes the
partition specified in
value (and all
of its data) from either a range-partitioned or manual list-partitioned table.
- 'set_global_access_mode': Sets the
global access mode (i.e. locking) for the table
specified in
table_name . Specify the
access mode in value . Valid modes are
'no_access', 'read_only', 'write_only' and
'read_write'.
- 'refresh': Replays all the table
creation commands required to create this materialized view.
- 'set_refresh_method': Sets the method
by which this materialized view is refreshed to
the method specified in
value - one of
'manual', 'periodic', 'on_change'.
- 'set_refresh_start_time': Sets the time
to start periodic refreshes of this materialized view to the datetime
string specified in
value with format
'YYYY-MM-DD HH:MM:SS'. Subsequent refreshes occur
at the specified time + N * the refresh period.
- 'set_refresh_period': Sets the time
interval in seconds at which to refresh this materialized view to the value
specified in
value . Also, sets the
refresh method to periodic if not already set.
- 'remove_text_search_attributes':
Removes text search attribute from all
columns.
- 'set_strategy_definition': Sets the tier strategy for the table and
its columns to the one specified in
value , replacing the existing tier
strategy in its entirety. See tier strategy usage for format and
tier strategy examples for
examples.
|
value |
String
|
The value of the modification, depending on
action . For example, if
action is add_column , this
would be the column name; while the column's
definition would be covered by the
column_type ,
column_properties ,
column_default_value , and
add_column_expression in
options . If action is
ttl , it would be the number of minutes
for the new TTL. If action is
refresh , this field would be blank. |
options |
Object
|
Optional parameters.
- 'action':
- 'column_name':
- 'table_name':
- 'column_default_value': When adding a
column, set a default value for existing records.
For nullable columns, the default value will be
null, regardless of data type.
- 'column_properties': When adding or
changing a column, set the column properties
(strings, separated by a comma: data, store_only,
text_search, char8, int8 etc).
- 'column_type': When adding or changing
a column, set the column type (strings, separated
by a comma: int, double, string, null etc).
- 'compression_type': When setting
column compression
(
set_column_compression for
action ), compression type to use:
none (to use no compression) or a
valid compression type.
Supported values:
- 'none'
- 'snappy'
- 'lz4'
- 'lz4hc'
The default value is 'snappy'.
- 'copy_values_from_column': Deprecated.
Please use
add_column_expression
instead.
- 'rename_column': When changing a
column, specify new column name.
- 'validate_change_column': When
changing a column, validate the change before
applying it. If
true , then validate
all values. A value too large (or too long) for the
new type will prevent any change. If
false , then when a value is too large
or long, it will be truncated.
Supported values:
- 'true': true
- 'false': false
The default value is 'true'.
- 'update_last_access_time': Indicates
whether the time-to-live (TTL) expiration
countdown timer should be reset to the table's TTL.
Supported values:
- 'true': Reset the expiration countdown
timer to the table's configured TTL.
- 'false': Don't reset the timer;
expiration countdown will continue from where it
is, as if the table had not been accessed.
The default value is 'true'.
- 'add_column_expression': When adding a
column, an optional expression to use for the new
column's values. Any valid expression may be used,
including one containing references to existing
columns in the same table.
- 'strategy_definition': Optional
parameter for specifying the tier strategy for the table and
its columns when
action is
set_strategy_definition , replacing the
existing tier strategy in its entirety. See tier strategy usage for format
and tier strategy examples for
examples. This option will be ignored if
value is also specified.
- 'index_type': Type of index to create,
when
action is
create_index , or to delete, when
action is delete_index .
Supported values:
The default value is 'column'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
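Two illustrative calls (table and column names are hypothetical); the first
adds a column, the second indexes it:
    var db = new GPUdb("http://localhost:9191");
    // Add an integer column, giving existing records a default value of 0.
    db.alter_table("employees", "add_column", "bonus",
        { "column_type": "int",
          "column_properties": "data",
          "column_default_value": "0" });
    // Create a column (attribute) index on the new column.
    var response = db.alter_table("employees", "create_index", "bonus",
        { "index_type": "column" });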
alter_table_columns(table_name, column_alterations, options, callback) → {Object}
Apply various modifications to columns in a table or view. The available
modifications include the following:
Create or delete an index on a
particular column. This can speed up certain operations when using
expressions
containing equality or relational operators on indexed columns. This only
applies to tables.
Manage a table's columns--a column can be added, removed, or have its
type and properties
modified.
Set or unset compression for a column.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Table on which the operation will be performed.
Must be an existing table or view. |
column_alterations |
Array.<Object>
|
List of alter-table add/delete/change
column requests, all for the same
table. Each request is a map that
includes 'column_name', 'action', and
the options specific to that action.
Note that these are the same options
as in alter table requests, but given
in the same map as the column name and
the action. For example:
[{'column_name':'col_1','action':'change_column','rename_column':'col_2'},
{'column_name':'col_1','action':'add_column',
'type':'int','default_value':'1'}
] |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
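A sketch that batches the two alterations shown in the column_alterations
example above (table and column names are hypothetical):
    var db = new GPUdb("http://localhost:9191");
    var response = db.alter_table_columns("employees",
        [ { "column_name": "col_1", "action": "change_column", "rename_column": "col_2" },
          { "column_name": "col_3", "action": "add_column", "type": "int", "default_value": "1" } ],
        {});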
alter_table_columns_request(request, callback) → {Object}
Apply various modifications to columns in a table or view. The available
modifications include the following:
Create or delete an index on a
particular column. This can speed up certain operations when using
expressions
containing equality or relational operators on indexed columns. This only
applies to tables.
Manage a table's columns--a column can be added, removed, or have its
type and properties
modified.
Set or unset compression for a column.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
alter_table_metadata(table_names, metadata_map, options, callback) → {Object}
Updates (adds or changes) metadata for tables. The metadata key and values
must both be strings. This is an easy way to annotate whole tables rather
than single records within tables. Some examples of metadata are the owner of
the table, the table creation timestamp, etc.
Parameters:
Name |
Type |
Description |
table_names |
Array.<String>
|
Names of the tables whose metadata will be
updated. All specified tables must exist, or
an error will be returned. |
metadata_map |
Object
|
A map which contains the metadata of the
tables that are to be updated. Note that only
one map is provided for all the tables; so the
change will be applied to every table. If the
provided map is empty, then all existing
metadata for the table(s) will be cleared. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
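A sketch of the call (the method name alter_table_metadata is inferred from
this section's parameter list; table names and metadata keys are
hypothetical):
    var db = new GPUdb("http://localhost:9191");
    // Apply the same metadata map to two existing tables.
    var response = db.alter_table_metadata(
        ["employees", "departments"],
        { "owner": "hr_team", "created": "2020-01-01" },
        {});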
alter_table_metadata_request(request, callback) → {Object}
Updates (adds or changes) metadata for tables. The metadata key and values
must both be strings. This is an easy way to annotate whole tables rather
than single records within tables. Some examples of metadata are the owner of
the table, the table creation timestamp, etc.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
alter_table_request(request, callback) → {Object}
Apply various modifications to a table, view, or collection. The
available modifications include the following:
Manage a table's columns--a column can be added, removed, or have its
type and properties
modified, including
whether it is compressed or not.
Create or delete an index on a
particular column. This can speed up certain operations when using
expressions
containing equality or relational operators on indexed columns. This only
applies to tables.
Create or delete a foreign key
on a particular column.
Manage a
range-partitioned or a
manual list-partitioned
table's partitions.
Set (or reset) the tier strategy
of a table or view.
Refresh and manage the refresh mode of a
materialized
view.
Set the time-to-live
(TTL). This can be applied
to tables, views, or collections. When applied to collections, every
contained
table & view that is not protected will have its TTL set to the given value.
Set the global access mode (i.e. locking) for a table. This setting trumps
any
role-based access controls that may be in place; e.g., a user with write
access
to a table marked read-only will not be able to insert records into it. The
mode
can be set to read-only, write-only, read/write, and no access.
Change the protection mode to prevent or
allow automatic expiration. This can be applied to tables, views, and
collections.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
alter_tier(name, options, callback) → {Object}
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the tier to be altered. Must be an existing
tier group name. |
options |
Object
|
Optional parameters.
- 'capacity': Maximum size in bytes this
tier may hold at once.
- 'high_watermark': Threshold of usage
of this tier's resource that, once exceeded, will
trigger watermark-based eviction from this tier.
- 'low_watermark': Threshold of resource
usage that, once fallen below after crossing the
high_watermark , will cease
watermark-based eviction from this tier.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
alter_tier_request(request, callback) → {Object}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
alter_user(name, action, value, options, callback) → {Object}
Alters a user.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user to be altered. Must be an existing
user. |
action |
String
|
Modification operation to be applied to the user.
Supported values:
- 'set_password': Sets the password of
the user. The user must be an internal user.
- 'set_resource_group': Sets the resource
group for an internal user. The resource group must
exist, otherwise, an empty string assigns the user
to the default resource group.
|
value |
String
|
The value of the modification, depending on
action . |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
alter_user_request(request, callback) → {Object}
Alters a user.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
append_records(table_name, source_table_name, field_map, options, callback) → {Object}
Append (or insert) all records from a source table (specified by
source_table_name) to a particular target table (specified by table_name).
The field map (specified by field_map) holds the user-specified map of target
table column names with their mapped source column names.
Parameters:
Name |
Type |
Description |
table_name |
String
|
The table name for the records to be appended.
Must be an existing table. |
source_table_name |
String
|
The source table name to get records
from. Must be an existing table name. |
field_map |
Object
|
Contains the mapping of column names from the
target table (specified by
table_name ) as the keys, and
corresponding column names or expressions (e.g.,
'col_name+1') from the source table (specified by
source_table_name ). Must be existing
column names in source table and target table,
and their types must be matched. For details on
using expressions, see Expressions. |
options |
Object
|
Optional parameters.
- 'offset': A positive integer
indicating the number of initial results to skip
from
source_table_name . Default is 0.
The minimum allowed value is 0. The maximum allowed
value is MAX_INT. The default value is '0'.
- 'limit': A positive integer indicating
the maximum number of results to be returned from
source_table_name . Or END_OF_SET
(-9999) to indicate that the max number of results
should be returned. The default value is '-9999'.
- 'expression': Optional filter
expression to apply to the
source_table_name . The default value
is ''.
- 'order_by': Comma-separated list of
the columns to be sorted by from source table
(specified by
source_table_name ),
e.g., 'timestamp asc, x desc'. The
order_by columns do not have to be
present in field_map . The default
value is ''.
- 'update_on_existing_pk': Specifies the
record collision policy for inserting the source
table records (specified by
source_table_name ) into the target
table (specified by table_name ) table
with a primary key. If set to
true , any existing target table record
with primary key values that match those of a
source table record being inserted will be replaced
by that new record. If set to false ,
any existing target table record with primary key
values that match those of a source table record
being inserted will remain unchanged and the new
record discarded. If the specified table does not
have a primary key, then this option is ignored.
Supported values:
The default value is 'false'.
- 'truncate_strings': If set to
true , it allows inserting longer
strings into smaller charN string columns by
truncating the longer strings to fit.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
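A sketch of appending records from a staging table, mapping one target column
to a source column and another to a source expression (all names are
hypothetical):
    var db = new GPUdb("http://localhost:9191");
    var response = db.append_records(
        "employees",                             // target table
        "employees_staging",                     // source table
        { "name": "full_name",                   // target column <- source column
          "salary": "base_salary + bonus" },     // target column <- source expression
        { "truncate_strings": "true" });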
append_records_request(request, callback) → {Object}
Append (or insert) all records from a source table (specified by
source_table_name) to a particular target table (specified by table_name).
The field map (specified by field_map) holds the user-specified map of target
table column names with their mapped source column names.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
clear_statistics(table_name, column_name, options, callback) → {Object}
Clears statistics (cardinality, mean value, etc.) for a column in a
specified table.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of a table. Must be an existing table. |
column_name |
String
|
Name of the column in table_name
for which to clear statistics. The column must
be from an existing table. An empty string
clears statistics for all columns in the table. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
clear_statistics_request(request, callback) → {Object}
Clears statistics (cardinality, mean value, etc.) for a column in a
specified table.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
clear_table(table_name, authorization, options, callback) → {Object}
Clears (drops) one or all tables in the database cluster. The operation is
synchronous, meaning that the table will be cleared before the function
returns. The response payload returns the status of the operation along with
the name of the table that was cleared.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to be cleared. Must be an
existing table. Empty string clears all
available tables, though this behavior is
prevented by default via the gpudb.conf parameter
'disable_clear_all'. |
authorization |
String
|
No longer used. User can pass an empty
string. |
options |
Object
|
Optional parameters.
- 'no_error_if_not_exists': If
true and if the table specified in
table_name does not exist no error is
returned. If false and if the table
specified in table_name does not exist
then an error is returned.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
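A sketch that drops a single table and tolerates its absence (the table name
is hypothetical):
    var db = new GPUdb("http://localhost:9191");
    var response = db.clear_table("employees_staging", "",   // authorization is no longer used
        { "no_error_if_not_exists": "true" });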
clear_table_monitor(topic_id, options, callback) → {Object}
Parameters:
Name |
Type |
Description |
topic_id |
String
|
The topic ID returned by
GPUdb#create_table_monitor . |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
clear_table_monitor_request(request, callback) → {Object}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
clear_table_request(request, callback) → {Object}
Clears (drops) one or all tables in the database cluster. The operation is
synchronous, meaning that the table will be cleared before the function
returns. The response payload returns the status of the operation along with
the name of the table that was cleared.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
clear_trigger(trigger_id, options, callback) → {Object}
Clears or cancels the trigger identified by the specified handle. The output
returns the handle of the trigger cleared as well as indicating success or
failure of the trigger deactivation.
Parameters:
Name |
Type |
Description |
trigger_id |
String
|
ID for the trigger to be deactivated. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
clear_trigger_request(request, callback) → {Object}
Clears or cancels the trigger identified by the specified handle. The output
returns the handle of the trigger cleared as well as indicating success or
failure of the trigger deactivation.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
collect_statistics(table_name, column_names, options, callback) → {Object}
Collect statistics for one or more columns in a specified table.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of a table. Must be an existing table. |
column_names |
Array.<String>
|
List of one or more column names in
table_name for which to collect
statistics (cardinality, mean value, etc.). |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
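A minimal sketch (table and column names are hypothetical):
    var db = new GPUdb("http://localhost:9191");
    // Collect cardinality, mean value, and other statistics for two columns.
    var response = db.collect_statistics("employees", ["department", "salary"], {});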
collect_statistics_request(request, callback) → {Object}
Collect statistics for one or more columns in a specified table.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_graph(graph_name, directed_graph, nodes, edges, weights, restrictions, options, callback) → {Object}
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph resource to generate. |
directed_graph |
Boolean
|
If set to true , the graph will
be directed. If set to false ,
the graph will not be directed. Consult Directed Graphs for more
details.
Supported values:
The default value is true. |
nodes |
Array.<String>
|
Nodes represent fundamental topological units of a
graph.
Nodes must be specified using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing column names,
e.g.,
'table.column AS NODE_ID', expressions, e.g.,
'ST_MAKEPOINT(column1, column2) AS NODE_WKTPOINT',
or constant values, e.g.,
'{9, 10, 11} AS NODE_ID'.
If using constant values in an identifier
combination, the number of values
specified must match across the combination. |
edges |
Array.<String>
|
Edges represent the required fundamental
topological unit of
a graph that typically connect nodes. Edges must be
specified using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing column names,
e.g.,
'table.column AS EDGE_ID', expressions, e.g.,
'SUBSTR(column, 1, 6) AS EDGE_NODE1_NAME', or
constant values, e.g.,
"{'family', 'coworker'} AS EDGE_LABEL".
If using constant values in an identifier
combination, the number of values
specified must match across the combination. |
weights |
Array.<String>
|
Weights represent a method of informing the graph
solver of
the cost of including a given edge in a solution.
Weights must be specified
using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing column
names, e.g.,
'table.column AS WEIGHTS_EDGE_ID', expressions,
e.g.,
'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED', or
constant values, e.g.,
'{4, 15} AS WEIGHTS_VALUESPECIFIED'.
If using constant values in an identifier
combination, the number of values specified
must match across the combination. |
restrictions |
Array.<String>
|
Restrictions represent a method of informing
the graph
solver which edges and/or nodes should be
ignored for the solution. Restrictions
must be specified using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing column
names, e.g.,
'table.column AS RESTRICTIONS_EDGE_ID',
expressions, e.g.,
'column/2 AS RESTRICTIONS_VALUECOMPARED', or
constant values, e.g.,
'{0, 0, 0, 1} AS
RESTRICTIONS_ONOFFCOMPARED'.
If using constant values in an identifier
combination, the number of values
specified must match across the combination. |
options |
Object
|
Optional parameters.
- 'restriction_threshold_value':
Value-based restriction comparison. Any node or
edge with a RESTRICTIONS_VALUECOMPARED value
greater than the
restriction_threshold_value will not
be included in the graph.
- 'merge_tolerance': If node geospatial
positions are input (e.g., WKTPOINT, X, Y),
determines the minimum separation allowed between
unique nodes. If nodes are within the tolerance of
each other, they will be merged as a single node.
The default value is '1.0E-4'.
- 'min_x': Minimum x (longitude) value
for spatial graph associations. The default value
is '-180.0'.
- 'max_x': Maximum x (longitude) value
for spatial graph associations. The default value
is '180.0'.
- 'min_y': Minimum y (latitude) value
for spatial graph associations. The default value
is '-90.0'.
- 'max_y': Maximum y (latitude) value
for spatial graph associations. The default value
is '90.0'.
- 'recreate': If set to
true and the graph (using
graph_name ) already exists, the graph
is deleted and recreated.
Supported values:
The default value is 'false'.
- 'modify': If set to
true ,
recreate is set to true ,
and the graph (specified using
graph_name ) already exists, the graph
is updated with the given components.
Supported values:
The default value is 'false'.
- 'export_create_results': If set to
true , returns the graph topology in
the response as arrays.
Supported values:
The default value is 'false'.
- 'enable_graph_draw': If set to
true , adds a 'EDGE_WKTLINE' column
identifier to the specified
graph_table so the graph can be viewed
via WMS; for social and non-geospatial graphs, the
'EDGE_WKTLINE' column identifier will be populated
with spatial coordinates derived from a flattening
layout algorithm so the graph can still be viewed.
Supported values:
The default value is 'false'.
- 'save_persist': If set to
true , the graph will be saved in the
persist directory (see the config
reference for more information). If set to
false , the graph will be removed when
the graph server is shutdown.
Supported values:
The default value is 'false'.
- 'sync_db': If set to
true
and save_persist is set to
true , the graph will be fully
reconstructed upon a database restart and be
updated to align with any source table(s) updates
made since the creation of the graph. If dynamic
graph updates upon table inserts are desired, use
add_table_monitor instead.
Supported values:
The default value is 'false'.
- 'add_table_monitor': Adds a table
monitor to every table used in the creation of the
graph; this table monitor will trigger the graph to
update dynamically upon inserts to the source
table(s). Note that upon database restart, if
save_persist is also set to
true , the graph will be fully
reconstructed and the table monitors will be
reattached. For more details on table monitors, see
GPUdb#create_table_monitor .
Supported values:
The default value is 'false'.
- 'graph_table': If specified, the
created graph is also created as a table with the
given name and following identifier columns:
'EDGE_ID', 'EDGE_NODE1_ID', 'EDGE_NODE2_ID'. If
left blank, no table is created. The default value
is ''.
- 'remove_label_only': When RESTRICTIONS
on labeled entities are requested, if set to true this
will NOT delete the entity but only the label
associated with the entity. Otherwise (the default),
it will delete both the label AND the entity.
Supported values:
The default value is 'false'.
- 'add_turns': Adds dummy 'pillowed'
edges around intersection nodes where there are
more than three edges so that additional weight
penalties can be imposed by the solve endpoints.
(increases the total number of edges).
Supported values:
The default value is 'false'.
- 'turn_angle': Value in degrees
modifies the thresholds for attributing right,
left, sharp turns, and intersections. It is the
vertical deviation angle from the incoming edge to
the intersection node. The larger the value, the
larger the threshold for sharp turns and
intersections; the smaller the value, the larger
the threshold for right and left turns; 0 <
turn_angle < 90. The default value is '60'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
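A rough sketch of building a directed graph from a hypothetical edge table;
the identifier combinations shown (EDGE_NODE1_NAME/EDGE_NODE2_NAME and
WEIGHTS_VALUESPECIFIED) follow the patterns described above, but the graph
documentation should be consulted for the combinations that are actually
valid:
    var db = new GPUdb("http://localhost:9191");
    var response = db.create_graph(
        "road_graph",
        true,                                                     // directed
        [],                                                       // nodes implied by the edges
        ["road_edges.origin AS EDGE_NODE1_NAME",
         "road_edges.destination AS EDGE_NODE2_NAME"],            // edges
        ["road_edges.travel_time AS WEIGHTS_VALUESPECIFIED"],     // weights
        [],                                                       // restrictions
        { "recreate": "true", "graph_table": "road_graph_table" });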
create_graph_request(request, callback) → {Object}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_job(endpoint, request_encoding, data, data_str, options, callback) → {Object}
Create a job which will run asynchronously. The response returns a job ID,
which can be used to query the status and result of the job. The status and
the result of the job upon completion can be requested by GPUdb#get_job.
Parameters:
Name |
Type |
Description |
endpoint |
String
|
Indicates which endpoint to execute, e.g.
'/alter/table'. |
request_encoding |
String
|
The encoding of the request payload for
the job.
Supported values:
The default value is 'binary'. |
data |
String
|
Binary-encoded payload for the job to be run
asynchronously. The payload must contain the relevant
input parameters for the endpoint indicated in
endpoint . Please see the documentation
for the appropriate endpoint to see what values must
(or can) be specified. If this parameter is used,
then request_encoding must be
binary or snappy . |
data_str |
String
|
JSON-encoded payload for the job to be run
asynchronously. The payload must contain the
relevant input parameters for the endpoint
indicated in endpoint . Please see
the documentation for the appropriate endpoint to
see what values must (or can) be specified. If
this parameter is used, then
request_encoding must be
json . |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
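A sketch that submits /clear/table as an asynchronous job using the JSON
encoding; passing an empty string for the binary data parameter alongside
data_str is an assumption, and the table name is hypothetical:
    var db = new GPUdb("http://localhost:9191");
    var payload = JSON.stringify({
        table_name: "employees_staging",
        authorization: "",
        options: {}
    });
    var response = db.create_job("/clear/table", "json", "", payload, {});
    // The response carries a job ID that can later be passed to GPUdb#get_job.
    console.log(response);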
create_job_request(request, callback) → {Object}
Create a job which will run asynchronously. The response returns a job ID,
which can be used to query the status and result of the job. The status and
the result of the job upon completion can be requested by GPUdb#get_job.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_join_table(join_table_name, table_names, column_names, expressions, options, callback) → {Object}
Parameters:
Name |
Type |
Description |
join_table_name |
String
|
Name of the join table to be created. Has
the same naming restrictions as tables. |
table_names |
Array.<String>
|
The list of table names composing the join.
Corresponds to a SQL statement FROM clause. |
column_names |
Array.<String>
|
List of member table columns or column
expressions to be included in the join.
Columns can be prefixed with
'table_id.column_name', where 'table_id' is
the table name or alias. Columns can be
aliased via the syntax 'column_name as
alias'. Wild cards '*' can be used to
include all columns across member tables or
'table_id.*' for all of a single table's
columns. Columns and column expressions
composing the join must be uniquely named or
aliased--therefore, the '*' wild card cannot
be used if column names aren't unique across
all tables. |
expressions |
Array.<String>
|
An optional list of expressions to combine
and filter the joined tables. Corresponds to
a SQL statement WHERE clause. For details
see: expressions. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the join. If the
collection provided is non-existent, the collection
will be automatically created. If empty, then the
join will be at the top level. The default value
is ''.
- 'max_query_dimensions': Obsolete in
GPUdb v7.0
- 'optimize_lookups': Use more memory to
speed up the joining of tables.
Supported values:
The default value is 'false'.
- 'ttl': Sets the TTL of the join table specified
in
join_table_name .
- 'view_id': view this join table is
part of. The default value is ''.
- 'no_count': Return a count of 0 for
the join table, both for logging and for show_table;
this optimization is needed for large, overlapped
equi-join stencils. The default value is 'false'.
- 'chunk_size': Maximum number of
records per joined-chunk for this table. Defaults
to the gpudb.conf file chunk size
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
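A brief sketch of an equi-join between two hypothetical tables; the table, column, and collection names are assumptions, and omitting the callback makes the request synchronous:
var GPUdb = require("./GPUdb.js");               // path to the API file (assumed)
var gpudb = new GPUdb("http://localhost:9191");  // hypothetical server URL
var response = gpudb.create_join_table(
    "order_customer_join",                                        // join_table_name
    ["orders", "customers"],                                      // table_names
    ["orders.id as order_id", "orders.total", "customers.name"],  // column_names
    ["orders.customer_id = customers.id"],                        // expressions
    { collection_name: "joins" }                                  // options
);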
create_join_table_request(request, callback) → {Object}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_materialized_view(table_name, options, callback) → {Object}
Initiates the process of creating a materialized view, reserving the view's
name to prevent other views or tables from being created with that name.
For materialized view details and examples, see Materialized
Views.
The response contains view_id
, which is used to tag each
subsequent operation (projection, union, aggregation, filter, or join) that
will compose the view.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to be created that is the
top-level table of the materialized view. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created table will be a
top-level table.
- 'ttl': Sets the TTL of the table specified in
table_name .
- 'persist': If
true , then
the materialized view specified in
table_name will be persisted and will
not expire unless a ttl is specified.
If false , then the materialized view
will be an in-memory table and will expire unless a
ttl is specified otherwise.
Supported values:
The default value is 'false'.
- 'refresh_method': Method by which the
join can be refreshed when the data in underlying
member tables have changed.
Supported values:
- 'manual': Refresh only occurs when
manually requested by calling
GPUdb#alter_table with an 'action' of
'refresh'
- 'on_query': For future use.
- 'on_change': If possible,
incrementally refresh (refresh just those records
added) whenever an insert, update, delete or
refresh of input table is done. A full refresh is
done if an incremental refresh is not possible.
- 'periodic': Refresh table periodically
at rate specified by
refresh_period
The default value is 'manual'.
- 'refresh_period': When
refresh_method is
periodic , specifies the period in
seconds at which refresh occurs
- 'refresh_start_time': When
refresh_method is
periodic , specifies the first time at
which a refresh is to be done. Value is a datetime
string with format 'YYYY-MM-DD HH:MM:SS'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
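A short sketch, assuming a hypothetical view name, that reserves a materialized view refreshed every 60 seconds; the view_id field mentioned above tags the subsequent operations that compose the view:
var GPUdb = require("./GPUdb.js");               // path to the API file (assumed)
var gpudb = new GPUdb("http://localhost:9191");  // hypothetical server URL
var response = gpudb.create_materialized_view("weather_mv", {
    refresh_method: "periodic",   // refresh at a fixed rate
    refresh_period: "60",         // seconds between refreshes
    persist: "true"               // keep the view across restarts
});
// Pass response.view_id as the 'view_id' option of the projection, union,
// aggregation, filter, or join calls that build the view's contents.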
create_materialized_view_request(request, callback) → {Object}
Initiates the process of creating a materialized view, reserving the view's
name to prevent other views or tables from being created with that name.
For materialized view details and examples, see Materialized
Views.
The response contains view_id
, which is used to tag each
subsequent operation (projection, union, aggregation, filter, or join) that
will compose the view.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_proc(proc_name, execution_mode, files, command, args, options, callback) → {Object}
Creates an instance (proc) of the user-defined function (UDF) specified by
the given command, options, and files, and makes it available for execution.
For details on UDFs, see:
User-Defined Functions
Parameters:
Name |
Type |
Description |
proc_name |
String
|
Name of the proc to be created. Must not be the
name of a currently existing proc. |
execution_mode |
String
|
The execution mode of the proc.
Supported values:
- 'distributed': Input table data
will be divided into data segments that are
distributed across all nodes in the cluster,
and the proc command will be invoked once
per data segment in parallel. Output table
data from each invocation will be saved to
the same node as the corresponding input
data.
- 'nondistributed': The proc
command will be invoked only once per
execution, and will not have access to any
input or output table data.
The default value is 'distributed'. |
files |
Object
|
A map of the files that make up the proc. The keys of
the map are file names, and the values are the binary
contents of the files. The file names may include
subdirectory names (e.g. 'subdir/file') but must not
resolve to a directory above the root for the proc. |
command |
String
|
The command (excluding arguments) that will be
invoked when the proc is executed. It will be
invoked from the directory containing the proc
files and may be any command that can
be resolved from that directory. It need not refer
to a file actually in that directory; for example,
it could be 'java' if the proc is a Java
application; however, any necessary external
programs must be preinstalled on every database
node. If the command refers to a file in that
directory, it must be preceded with './' as per
Linux convention. If not specified, and exactly one
file is provided in files , that file
will be invoked. |
args |
Array.<String>
|
An array of command-line arguments that will be
passed to command when the proc is
executed. |
options |
Object
|
Optional parameters.
- 'max_concurrency_per_node': The
maximum number of concurrent instances of the proc
that will be executed per node. 0 allows unlimited
concurrency. The default value is '0'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
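An illustrative sketch that registers a distributed Python UDF; the proc name, file name, and script body are assumptions, and the file contents are passed here as a plain string:
var GPUdb = require("./GPUdb.js");               // path to the API file (assumed)
var gpudb = new GPUdb("http://localhost:9191");  // hypothetical server URL
var procSource = "print('hello from the proc')\n";   // illustrative UDF body
var response = gpudb.create_proc(
    "hello_proc",                        // proc_name
    "distributed",                       // execution_mode
    { "hello_proc.py": procSource },     // files: name -> contents
    "python",                            // command (external program, preinstalled)
    ["hello_proc.py"],                   // args passed to the command
    { max_concurrency_per_node: "0" }    // 0 = unlimited concurrency
);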
create_proc_request(request, callback) → {Object}
Creates an instance (proc) of the user-defined function (UDF) specified by
the given command, options, and files, and makes it available for execution.
For details on UDFs, see:
User-Defined Functions
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_projection(table_name, projection_name, column_names, options, callback) → {Object}
Creates a new
projection of an existing table. A projection represents a
subset of the columns (potentially including derived columns) of a table.
For projection details and examples, see Projections. For
limitations, see Projection Limitations and Cautions.
Window functions,
which can perform operations like moving averages, are available through
this endpoint as well as GPUdb#get_records_by_column
.
A projection can be created with a different shard key
than the source table. By specifying shard_key
, the projection
will be sharded according to the specified columns, regardless of how the
source table is sharded. The source table can even be unsharded or
replicated.
If table_name
is empty, selection is performed against a
single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the existing table on which the
projection is to be applied. An empty table
name creates a projection from a single-row
virtual table, where columns specified should be
constants or constant expressions. |
projection_name |
String
|
Name of the projection to be created. Has
the same naming restrictions as tables. |
column_names |
Array.<String>
|
List of columns from table_name
to be included in the projection. Can
include derived columns. Columns can be aliased
via the syntax 'column_name as alias'. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a collection to which the
projection is to be assigned as a child. If the
collection provided is non-existent, the collection
will be automatically created. If empty, then the
projection will be at the top level. The default
value is ''.
- 'expression': An optional filter expression to be applied to the
source table prior to the projection. The default
value is ''.
- 'is_replicated': If
true
then the projection will be replicated even if the
source table is not.
Supported values:
The default value is 'false'.
- 'limit': The number of records to
keep. The default value is ''.
- 'order_by': Comma-separated list of
the columns to be sorted by; e.g. 'timestamp asc, x
desc'. The columns specified must be present in
column_names . If any alias is given
for any column name, the alias must be used, rather
than the original column name. The default value
is ''.
- 'materialize_on_gpu': No longer used.
See Resource Management Concepts for
information about how resources are managed, Tier
Strategy Concepts for how resources are
targeted for VRAM, and Tier Strategy Usage for how to
specify a table's priority in VRAM.
Supported values:
The default value is 'false'.
- 'chunk_size': Indicates the number of
records per chunk to be used for this projection.
- 'create_indexes': Comma-separated list
of columns on which to create indexes on the
projection. The columns specified must be present
in
column_names . If any alias is
given for any column name, the alias must be used,
rather than the original column name.
- 'ttl': Sets the TTL of the projection specified
in
projection_name .
- 'shard_key': Comma-separated list of
the columns to be sharded on; e.g. 'column1,
column2'. The columns specified must be present in
column_names . If any alias is given
for any column name, the alias must be used, rather
than the original column name. The default value
is ''.
- 'persist': If
true , then
the projection specified in
projection_name will be persisted and
will not expire unless a ttl is
specified. If false , then the
projection will be an in-memory table and will
expire unless a ttl is specified
otherwise.
Supported values:
The default value is 'false'.
- 'preserve_dict_encoding': If
true , then columns that were dict
encoded in the source table will be dict encoded in
the projection.
Supported values:
The default value is 'true'.
- 'retain_partitions': Determines
whether the created projection will retain the
partitioning scheme from the source table.
Supported values:
The default value is 'false'.
- 'view_id': ID of view of which this
projection is a member. The default value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
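A compact sketch of a filtered, ordered projection over a hypothetical source table; all names and option values are assumptions:
var GPUdb = require("./GPUdb.js");               // path to the API file (assumed)
var gpudb = new GPUdb("http://localhost:9191");  // hypothetical server URL
var response = gpudb.create_projection(
    "weather",                                       // source table_name
    "weather_recent",                                // projection_name
    ["city", "temperature as temp", "timestamp"],    // column_names (with an alias)
    {
        expression: "temperature > 0",               // filter applied to the source first
        order_by: "timestamp asc",                   // must reference column_names entries
        ttl: "120"                                   // TTL for the projection
    }
);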
create_projection_request(request, callback) → {Object}
Creates a new
projection of an existing table. A projection represents a
subset of the columns (potentially including derived columns) of a table.
For projection details and examples, see Projections. For
limitations, see Projection Limitations and Cautions.
Window functions,
which can perform operations like moving averages, are available through
this endpoint as well as GPUdb#get_records_by_column
.
A projection can be created with a different shard key
than the source table. By specifying shard_key
, the projection
will be sharded according to the specified columns, regardless of how the
source table is sharded. The source table can even be unsharded or
replicated.
If table_name
is empty, selection is performed against a
single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_resource_group(name, tier_attributes, ranking, adjoining_resource_group, options, callback) → {Object}
Creates a new resource group to facilitate resource management.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the group to be created. Must contain only
letters, digits, and underscores, and cannot begin
with a digit. Must not match an existing resource group
name. |
tier_attributes |
Object
|
Optional map containing tier names and
their respective attribute group limits.
The only valid attribute limit that can be
set is max_memory (in bytes) for the VRAM &
RAM tiers.
For instance, to set max VRAM capacity to
1GB and max RAM capacity to 10GB, use:
{'VRAM':{'max_memory':'1000000000'},
'RAM':{'max_memory':'10000000000'}}
- 'max_memory': Maximum amount
of memory usable in the given tier at one
time for this group.
|
ranking |
String
|
Indicates the relative ranking among existing
resource groups where this new resource group will
be placed. When using before or
after , specify which resource group
this one will be inserted before or after in
adjoining_resource_group .
Supported values:
- 'first'
- 'last'
- 'before'
- 'after'
|
adjoining_resource_group |
String
|
If ranking is
before or
after , this field
indicates the resource group
before or after which the current
group will be placed; otherwise,
leave blank. |
options |
Object
|
Optional parameters.
- 'max_cpu_concurrency': Maximum number
of simultaneous threads that will be used to
execute a request for this group.
- 'max_scheduling_priority': Maximum
priority of a scheduled task for this group.
- 'max_tier_priority': Maximum priority
of a tiered object for this group.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
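A sketch following the tier_attributes example given above (1 GB VRAM, 10 GB RAM); the group name and concurrency limit are assumptions:
var GPUdb = require("./GPUdb.js");               // path to the API file (assumed)
var gpudb = new GPUdb("http://localhost:9191");  // hypothetical server URL
var response = gpudb.create_resource_group(
    "analysts",                                   // name
    {
        VRAM: { max_memory: "1000000000" },       // 1 GB VRAM cap
        RAM:  { max_memory: "10000000000" }       // 10 GB RAM cap
    },
    "first",                                      // ranking
    "",                                           // adjoining_resource_group (blank unless before/after)
    { max_cpu_concurrency: "4" }                  // options
);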
create_resource_group_request(request, callback) → {Object}
Creates a new resource group to facilitate resource management.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_role(name, options, callback) → {Object}
Creates a new role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the role to be created. Must contain only
lowercase letters, digits, and underscores, and cannot
begin with a digit. Must not be the same name as an
existing user or role. |
options |
Object
|
Optional parameters.
- 'resource_group': Name of an existing
resource group to associate with this role
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_role_request(request, callback) → {Object}
Creates a new role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_table(table_name, type_id, options, callback) → {Object}
Creates a new table or collection. If a new table is being created,
the type of the table is given by
type_id
, which must be the ID
of
a currently registered type (i.e. one created via
GPUdb#create_type
). The
table will be created inside a collection if the option
collection_name
is specified. If that collection does
not already exist, it will be created.
To create a new collection, specify the name of the collection in
table_name
and set the is_collection
option to
true
; type_id
will be
ignored.
A table may optionally be designated to use a
replicated distribution scheme,
have foreign
keys to other
tables assigned, be assigned a
partitioning scheme, or have a
tier
strategy assigned.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to be created. The error
normally raised when a table of the same name
and type ID already exists may be suppressed by using the
no_error_if_exists option. See Tables for naming
restrictions. |
type_id |
String
|
ID of a currently registered type. All objects
added to the newly created table will be of this
type. Ignored if is_collection is
true . |
options |
Object
|
Optional parameters.
- 'no_error_if_exists': If
true , prevents an error from occurring
if the table already exists and is of the given
type. If a table with the same ID but a different
type exists, it is still an error.
Supported values:
The default value is 'false'.
- 'collection_name': Name of a
collection which is to contain the newly created
table. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created table will be a
top-level table.
- 'is_collection': Indicates whether the
new table to be created will be a collection.
Supported values:
The default value is 'false'.
- 'disallow_homogeneous_tables': No
longer supported; value will be ignored.
Supported values:
The default value is 'false'.
- 'is_replicated': For a table, affects
the distribution scheme for the
table's data. If true and the given type has no
explicit shard key defined, the table will
be replicated. If false, the table
will be sharded according to the shard
key specified in the given
type_id , or
randomly sharded, if no shard key
is specified. Note that a type containing a shard
key cannot be used to create a replicated table.
Supported values:
The default value is 'false'.
- 'foreign_keys': Semicolon-separated
list of foreign keys, of the format
'(source_column_name [, ...]) references
target_table_name(primary_key_column_name [, ...])
[as foreign_key_name]'.
- 'foreign_shard_key': Foreign shard key
of the format 'source_column references
shard_by_column from
target_table(primary_key_column)'.
- 'partition_type': Partitioning scheme to use.
Supported values:
- 'partition_keys': Comma-separated list
of partition keys, which are the columns or column
expressions by which records will be assigned to
partitions defined by
partition_definitions .
- 'partition_definitions':
Comma-separated list of partition definitions,
whose format depends on the choice of
partition_type . See range partitioning, interval partitioning, list partitioning, or hash partitioning for example
formats.
- 'is_automatic_partition': If true, a
new partition will be created for values which
don't fall into an existing partition. Currently
only supported for list partitions.
Supported values:
The default value is 'false'.
- 'ttl': For a table, sets the TTL of the table specified in
table_name .
- 'chunk_size': Indicates the number of
records per chunk to be used for this table.
- 'is_result_table': For a table,
indicates whether the table is an in-memory table.
A result table cannot contain store_only,
text_search, or string columns (charN columns are
acceptable), and it will not be retained if the
server is restarted.
Supported values:
The default value is 'false'.
- 'strategy_definition': The tier strategy for the table and
its columns. See tier strategy usage for format
and tier strategy examples for
examples.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
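A minimal sketch that creates a table from a previously registered type; the table and collection names are assumptions, and the type ID is taken from an earlier GPUdb#create_type call:
var GPUdb = require("./GPUdb.js");               // path to the API file (assumed)
var gpudb = new GPUdb("http://localhost:9191");  // hypothetical server URL
var typeId = "...";   // ID returned by an earlier GPUdb#create_type call
var response = gpudb.create_table(
    "weather",                                    // table_name
    typeId,                                       // type_id of a registered type
    {
        no_error_if_exists: "true",               // tolerate an identical existing table
        collection_name: "demo"                   // created automatically if absent
    }
);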
create_table_monitor(table_name, options, callback) → {Object}
Creates a monitor that watches for a single table modification event
type (insert, update, or delete) on a particular table (identified by
table_name
) and forwards event notifications to subscribers via
ZMQ.
After this call completes, subscribe to the returned
topic_id
on the
ZMQ table monitor port (default 9002). Each time an operation of the given
type
on the table completes, a multipart message is published for that topic; the
first part contains only the topic ID, and each subsequent part contains one
binary-encoded Avro object that corresponds to the event and can be decoded
using
type_schema
. The monitor will continue to run (regardless
of
whether or not there are any subscribers) until deactivated with
GPUdb#clear_table_monitor
.
For more information on table monitors, see
Table
Monitors.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to monitor. Must not refer to
a collection. |
options |
Object
|
Optional parameters.
- 'event': Type of modification event on
the target table to be monitored by this table
monitor.
Supported values:
- 'insert': Get notifications of new
record insertions. The new row images are forwarded
to the subscribers.
- 'update': Get notifications of update
operations. The modified row count information is
forwarded to the subscribers.
- 'delete': Get notifications of delete
operations. The deleted row count information is
forwarded to the subscribers.
The default value is 'insert'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
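A short sketch that sets up an insert monitor on a hypothetical table; the topic_id and type_schema field names follow the description above, and the ZMQ subscription itself is outside the scope of this API:
var GPUdb = require("./GPUdb.js");               // path to the API file (assumed)
var gpudb = new GPUdb("http://localhost:9191");  // hypothetical server URL
var response = gpudb.create_table_monitor(
    "weather",                 // table to watch (must not be a collection)
    { event: "insert" }        // notify on new record insertions
);
// Subscribe to response.topic_id on the ZMQ table monitor port (default 9002)
// and decode each message part using the returned type_schema.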
create_table_monitor_request(request, callback) → {Object}
Creates a monitor that watches for a single table modification event
type (insert, update, or delete) on a particular table (identified by
table_name
) and forwards event notifications to subscribers via
ZMQ.
After this call completes, subscribe to the returned
topic_id
on the
ZMQ table monitor port (default 9002). Each time an operation of the given
type
on the table completes, a multipart message is published for that topic; the
first part contains only the topic ID, and each subsequent part contains one
binary-encoded Avro object that corresponds to the event and can be decoded
using
type_schema
. The monitor will continue to run (regardless
of
whether or not there are any subscribers) until deactivated with
GPUdb#clear_table_monitor
.
For more information on table monitors, see
Table
Monitors.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_table_request(request, callback) → {Object}
Creates a new table or collection. If a new table is being created,
the type of the table is given by
type_id
, which must be the ID
of
a currently registered type (i.e. one created via
GPUdb#create_type
). The
table will be created inside a collection if the option
collection_name
is specified. If that collection does
not already exist, it will be created.
To create a new collection, specify the name of the collection in
table_name
and set the is_collection
option to
true
; type_id
will be
ignored.
A table may optionally be designated to use a
replicated distribution scheme,
have foreign
keys to other
tables assigned, be assigned a
partitioning scheme, or have a
tier
strategy assigned.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_trigger_by_area(request_id, table_names, x_column_name, x_vector, y_column_name, y_vector, options, callback) → {Object}
Sets up an area trigger mechanism for two column_names for one or more
tables. (This function is essentially the two-dimensional version of
GPUdb#create_trigger_by_range
.) Once the trigger has been
activated, any record added to the listed table(s) via
GPUdb#insert_records
with the chosen columns' values falling
within the specified region will trip the trigger. All such records will be
queued at the trigger port (by default '9001' but able to be retrieved via
GPUdb#show_system_status
) for any listening client to collect.
Active triggers can be cancelled by using the
GPUdb#clear_trigger
endpoint or by clearing all relevant
tables.
The output returns the trigger handle as well as indicating success or
failure of the trigger activation.
Parameters:
Name |
Type |
Description |
request_id |
String
|
User-created ID for the trigger. The ID can be
alphanumeric, contain symbols, and must contain
at least one character. |
table_names |
Array.<String>
|
Names of the tables on which the trigger will
be activated and maintained. |
x_column_name |
String
|
Name of a numeric column on which the trigger
is activated. Usually 'x' for geospatial data
points. |
x_vector |
Array.<Number>
|
The respective coordinate values for the region
on which the trigger is activated. This usually
translates to the x-coordinates of a geospatial
region. |
y_column_name |
String
|
Name of a second numeric column on which the
trigger is activated. Usually 'y' for
geospatial data points. |
y_vector |
Array.<Number>
|
The respective coordinate values for the region
on which the trigger is activated. This usually
translates to the y-coordinates of a geospatial
region. Must be the same length as x_vector. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_trigger_by_area_request(request, callback) → {Object}
Sets up an area trigger mechanism for two column_names for one or more
tables. (This function is essentially the two-dimensional version of
GPUdb#create_trigger_by_range
.) Once the trigger has been
activated, any record added to the listed table(s) via
GPUdb#insert_records
with the chosen columns' values falling
within the specified region will trip the trigger. All such records will be
queued at the trigger port (by default '9001' but able to be retrieved via
GPUdb#show_system_status
) for any listening client to collect.
Active triggers can be cancelled by using the
GPUdb#clear_trigger
endpoint or by clearing all relevant
tables.
The output returns the trigger handle as well as indicating success or
failure of the trigger activation.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_trigger_by_range(request_id, table_names, column_name, min, max, options, callback) → {Object}
Sets up a simple range trigger for a column_name for one or more tables.
Once the trigger has been activated, any record added to the listed
table(s) via
GPUdb#insert_records
with the chosen
column_name's value falling within the specified range will trip the
trigger. All such records will be queued at the trigger port (by default
'9001' but able to be retrieved via
GPUdb#show_system_status
)
for any listening client to collect. Active triggers can be cancelled by
using the
GPUdb#clear_trigger
endpoint or by clearing all
relevant tables.
The output returns the trigger handle as well as indicating success or
failure of the trigger activation.
Parameters:
Name |
Type |
Description |
request_id |
String
|
User-created ID for the trigger. The ID can be
alphanumeric, contain symbols, and must contain
at least one character. |
table_names |
Array.<String>
|
Tables on which the trigger will be active. |
column_name |
String
|
Name of a numeric column_name on which the
trigger is activated. |
min |
Number
|
The lower bound (inclusive) for the trigger range. |
max |
Number
|
The upper bound (inclusive) for the trigger range. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
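An illustrative sketch of a range trigger on a hypothetical numeric column; matching inserts are queued at the trigger port (default '9001') for a listening client to collect:
var GPUdb = require("./GPUdb.js");               // path to the API file (assumed)
var gpudb = new GPUdb("http://localhost:9191");  // hypothetical server URL
var response = gpudb.create_trigger_by_range(
    "high_temp_alert",         // request_id (user-created trigger ID)
    ["weather"],               // table_names
    "temperature",             // column_name to watch
    35,                        // min (inclusive)
    100,                       // max (inclusive)
    {}                         // options
);
// Cancel later with GPUdb#clear_trigger using the returned trigger handle.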
create_trigger_by_range_request(request, callback) → {Object}
Sets up a simple range trigger for a column_name for one or more tables.
Once the trigger has been activated, any record added to the listed
table(s) via
GPUdb#insert_records
with the chosen
column_name's value falling within the specified range will trip the
trigger. All such records will be queued at the trigger port (by default
'9001' but able to be retrieved via
GPUdb#show_system_status
)
for any listening client to collect. Active triggers can be cancelled by
using the
GPUdb#clear_trigger
endpoint or by clearing all
relevant tables.
The output returns the trigger handle as well as indicating success or
failure of the trigger activation.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_type(type_definition, label, properties, options, callback) → {Object}
Creates a new type describing the layout or schema of a table. The type
definition is a JSON string describing the fields (i.e. columns) of the
type. Each field consists of a name and a data type. Supported data types
are: double, float, int, long, string, and bytes. In addition one or more
properties can be specified for each column which customize the memory usage
and query availability of that column. Note that some properties are
mutually exclusive--i.e. they cannot be specified for any given column
simultaneously. One example of mutually exclusive properties is
data
and
store_only
.
A single primary key and/or single shard key can
be set across one or more columns. If a primary key is specified, then a
uniqueness constraint is enforced, in that only a single object can exist
with a given primary key. When inserting
data into a table with a primary key, depending on the parameters in the
request, incoming objects with primary key values that match existing
objects will either overwrite (i.e. update) the existing object or will be
skipped and not added into the set.
Example of a type definition with some of the parameters:
{"type":"record",
"name":"point",
"fields":[{"name":"msg_id","type":"string"},
{"name":"x","type":"double"},
{"name":"y","type":"double"},
{"name":"TIMESTAMP","type":"double"},
{"name":"source","type":"string"},
{"name":"group_id","type":"string"},
{"name":"OBJECT_ID","type":"string"}]
}
Properties:
{"group_id":["store_only"],
"msg_id":["store_only","text_search"]
}
Parameters:
Name |
Type |
Description |
type_definition |
String
|
a JSON string describing the columns of the
type to be registered. |
label |
String
|
A user-defined description string which can be used
to differentiate between tables and types with
otherwise identical schemas. |
properties |
Object
|
Each key-value pair specifies the properties to
use for a given column where the key is the
column name. All keys used must be relevant
column names for the given table. Specifying
any property overrides the default properties
for that column (which is based on the column's
data type).
Valid values are:
- 'data': Default property for all
numeric and string type columns; makes the
column available for GPU queries.
- 'text_search': Valid only for
'string' columns. Enables full text search for
string columns. Can be set independently of
data and store_only .
- 'store_only': Persist the column
value but do not make it available to queries
(e.g.
GPUdb#filter ), i.e. it is
mutually exclusive to the data
property. Any 'bytes' type column must have a
store_only property. This property
reduces system memory usage.
- 'disk_optimized': Works in
conjunction with the
data property
for string columns. This property reduces system
disk usage by disabling reverse string lookups.
Queries like GPUdb#filter ,
GPUdb#filter_by_list , and
GPUdb#filter_by_value work as
usual but GPUdb#aggregate_unique
and GPUdb#aggregate_group_by are
not allowed on columns with this property.
- 'timestamp': Valid only for 'long'
columns. Indicates that this field represents a
timestamp and will be provided in milliseconds
since the Unix epoch: 00:00:00 Jan 1 1970.
Dates represented by a timestamp must fall
between the year 1000 and the year 2900.
- 'ulong': Valid only for 'string'
columns. It represents an unsigned long integer
data type. The string can only be interpreted as
an unsigned long data type with minimum value of
zero, and maximum value of 18446744073709551615.
- 'decimal': Valid only for 'string'
columns. It represents a SQL type NUMERIC(19,
4) data type. There can be up to 15 digits
before the decimal point and up to four digits
in the fractional part. The value can be
positive or negative (indicated by a minus sign
at the beginning). This property is mutually
exclusive with the
text_search
property.
- 'date': Valid only for 'string'
columns. Indicates that this field represents a
date and will be provided in the format
'YYYY-MM-DD'. The allowable range is 1000-01-01
through 2900-01-01. This property is mutually
exclusive with the
text_search
property.
- 'time': Valid only for 'string'
columns. Indicates that this field represents a
time-of-day and will be provided in the format
'HH:MM:SS.mmm'. The allowable range is
00:00:00.000 through 23:59:59.999. This
property is mutually exclusive with the
text_search property.
- 'datetime': Valid only for 'string'
columns. Indicates that this field represents a
datetime and will be provided in the format
'YYYY-MM-DD HH:MM:SS.mmm'. The allowable range
is 1000-01-01 00:00:00.000 through 2900-01-01
23:59:59.999. This property is mutually
exclusive with the
text_search
property.
- 'char1': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 1 character.
- 'char2': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 2 characters.
- 'char4': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 4 characters.
- 'char8': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 8 characters.
- 'char16': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 16 characters.
- 'char32': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 32 characters.
- 'char64': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 64 characters.
- 'char128': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 128 characters.
- 'char256': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 256 characters.
- 'int8': This property provides
optimized memory and query performance for int
columns. Ints with this property must be between
-128 and +127 (inclusive)
- 'int16': This property provides
optimized memory and query performance for int
columns. Ints with this property must be between
-32768 and +32767 (inclusive)
- 'ipv4': This property provides
optimized memory, disk and query performance for
string columns representing IPv4 addresses (i.e.
192.168.1.1). Strings with this property must be
of the form: A.B.C.D where A, B, C and D are in
the range of 0-255.
- 'wkt': Valid only for 'string' and
'bytes' columns. Indicates that this field
contains geospatial geometry objects in
Well-Known Text (WKT) or Well-Known Binary (WKB)
format.
- 'primary_key': This property
indicates that this column will be part of (or
the entire) primary key.
- 'shard_key': This property
indicates that this column will be part of (or
the entire) shard key.
- 'nullable': This property indicates
that this column is nullable. However, setting
this property is insufficient for making the
column nullable. The user must declare the type
of the column as a union between its regular
type and 'null' in the avro schema for the
record type in
type_definition .
For example, if a column is of type integer and
is nullable, then the entry for the column in
the avro schema must be: ['int', 'null'].
The C++, C#, Java, and Python APIs have built-in
convenience for bypassing setting the avro
schema by hand. For those languages, one can
use this property as usual and not have to worry
about the avro schema for the record.
- 'dict': This property indicates
that this column should be dictionary encoded. It can
only be used in conjunction with restricted
string (charN), int, long or date columns.
Dictionary encoding is best for columns where
the cardinality (the number of unique values) is
expected to be low. This property can save a
large amount of memory.
- 'init_with_now': For 'date',
'time', 'datetime', or 'timestamp' column types,
replace empty strings and invalid timestamps
with 'NOW()' upon insert.
The default value is an empty dict ( {} ). |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
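The sketch below registers the 'point' type from the example above, reusing its type definition and properties verbatim; the label and the type_id response field name are assumptions:
var GPUdb = require("./GPUdb.js");               // path to the API file (assumed)
var gpudb = new GPUdb("http://localhost:9191");  // hypothetical server URL
var typeDefinition = JSON.stringify({
    type: "record",
    name: "point",
    fields: [
        { name: "msg_id",    type: "string" },
        { name: "x",         type: "double" },
        { name: "y",         type: "double" },
        { name: "TIMESTAMP", type: "double" },
        { name: "source",    type: "string" },
        { name: "group_id",  type: "string" },
        { name: "OBJECT_ID", type: "string" }
    ]
});
var properties = {
    group_id: ["store_only"],
    msg_id:   ["store_only", "text_search"]
};
var response = gpudb.create_type(typeDefinition, "point_type", properties, {});
// The returned type ID (assumed field name: type_id) can be passed to
// GPUdb#create_table.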
create_type_request(request, callback) → {Object}
Creates a new type describing the layout or schema of a table. The type
definition is a JSON string describing the fields (i.e. columns) of the
type. Each field consists of a name and a data type. Supported data types
are: double, float, int, long, string, and bytes. In addition one or more
properties can be specified for each column which customize the memory usage
and query availability of that column. Note that some properties are
mutually exclusive--i.e. they cannot be specified for any given column
simultaneously. One example of mutually exclusive properties are
data
and
store_only
.
A single primary key and/or single shard key can
be set across one or more columns. If a primary key is specified, then a
uniqueness constraint is enforced, in that only a single object can exist
with a given primary key. When inserting
data into a table with a primary key, depending on the parameters in the
request, incoming objects with primary key values that match existing
objects will either overwrite (i.e. update) the existing object or will be
skipped and not added into the set.
Example of a type definition with some of the parameters:
{"type":"record",
"name":"point",
"fields":[{"name":"msg_id","type":"string"},
{"name":"x","type":"double"},
{"name":"y","type":"double"},
{"name":"TIMESTAMP","type":"double"},
{"name":"source","type":"string"},
{"name":"group_id","type":"string"},
{"name":"OBJECT_ID","type":"string"}]
}
Properties:
{"group_id":["store_only"],
"msg_id":["store_only","text_search"]
}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_union(table_name, table_names, input_column_names, output_column_names, options, callback) → {Object}
Merges data from one or more tables with comparable data types into a new
table.
The following merges are supported:
UNION (DISTINCT/ALL) - For data set union details and examples, see Union. For limitations,
see Union Limitations and Cautions.
INTERSECT (DISTINCT/ALL) - For data set intersection details and examples,
see Intersect.
For limitations, see Intersect Limitations.
EXCEPT (DISTINCT/ALL) - For data set subtraction details and examples, see
Except. For
limitations, see Except Limitations.
MERGE VIEWS - For a given set of filtered views
on a single table, creates a single filtered view containing all of the
unique records across all of the given filtered data sets.
Non-charN 'string' and 'bytes' column types cannot be merged, nor can
columns marked as store-only.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to be created. Has the same
naming restrictions as tables. |
table_names |
Array.<String>
|
The list of table names to merge. Must
contain the names of one or more existing
tables. |
input_column_names |
Array.<Array.<String>>
|
The list of columns from each of the
corresponding input tables. |
output_column_names |
Array.<String>
|
The list of names of the columns to
be stored in the output table. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the output table. If
the collection provided is non-existent, the
collection will be automatically created. If empty,
the output table will be a top-level table. The
default value is ''.
- 'materialize_on_gpu': No longer used.
See Resource Management Concepts for
information about how resources are managed, Tier
Strategy Concepts for how resources are
targeted for VRAM, and Tier Strategy Usage for how to
specify a table's priority in VRAM.
Supported values:
The default value is 'false'.
- 'mode': If
merge_views ,
then this operation will merge the provided views.
All table_names must be views from the
same underlying base table.
Supported values:
- 'union_all': Retains all rows from the
specified tables.
- 'union': Retains all unique rows from
the specified tables (synonym for
union_distinct ).
- 'union_distinct': Retains all unique
rows from the specified tables.
- 'except': Retains all unique rows from
the first table that do not appear in the second
table (only works on 2 tables).
- 'except_all': Retains all
rows(including duplicates) from the first table
that do not appear in the second table (only works
on 2 tables).
- 'intersect': Retains all unique rows
that appear in both of the specified tables (only
works on 2 tables).
- 'intersect_all': Retains all
rows(including duplicates) that appear in both of
the specified tables (only works on 2 tables).
- 'merge_views': Merge two or more views
(or views of views) of the same base data set into
a new view. If this mode is selected
input_column_names AND
output_column_names must be empty. The
resulting view would match the results of a SQL OR
operation, e.g., if filter 1 creates a view using
the expression 'x = 20' and filter 2 creates a view
using the expression 'x <= 10', then the merge
views operation creates a new view using the
expression 'x = 20 OR x <= 10'.
The default value is 'union_all'.
- 'chunk_size': Indicates the number of
records per chunk to be used for this output table.
- 'create_indexes': Comma-separated list
of columns on which to create indexes on the output
table. The columns specified must be present in
output_column_names .
- 'ttl': Sets the TTL of the output table specified
in
table_name .
- 'persist': If
true , then
the output table specified in
table_name will be persisted and will
not expire unless a ttl is specified.
If false , then the output table will
be an in-memory table and will expire unless a
ttl is specified otherwise.
Supported values:
The default value is 'false'.
- 'view_id': ID of view of which this
output table is a member. The default value is ''.
- 'force_replicated': If
true , then the output table specified
in table_name will be replicated even
if the source tables are not.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
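A brief sketch of a UNION ALL over two hypothetical tables with matching columns; all names are assumptions:
var GPUdb = require("./GPUdb.js");               // path to the API file (assumed)
var gpudb = new GPUdb("http://localhost:9191");  // hypothetical server URL
var response = gpudb.create_union(
    "weather_all",                                        // output table_name
    ["weather_2022", "weather_2023"],                     // table_names to merge
    [["city", "temperature"], ["city", "temperature"]],   // input_column_names per table
    ["city", "temperature"],                              // output_column_names
    { mode: "union_all" }                                 // keep duplicate rows
);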
create_union_request(request, callback) → {Object}
Merges data from one or more tables with comparable data types into a new
table.
The following merges are supported:
UNION (DISTINCT/ALL) - For data set union details and examples, see Union. For limitations,
see Union Limitations and Cautions.
INTERSECT (DISTINCT/ALL) - For data set intersection details and examples,
see Intersect.
For limitations, see Intersect Limitations.
EXCEPT (DISTINCT/ALL) - For data set subtraction details and examples, see
Except. For
limitations, see Except Limitations.
MERGE VIEWS - For a given set of filtered views
on a single table, creates a single filtered view containing all of the
unique records across all of the given filtered data sets.
Non-charN 'string' and 'bytes' column types cannot be merged, nor can
columns marked as store-only.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_user_external(name, options, callback) → {Object}
Creates a new external user (a user whose credentials are managed by an
external LDAP).
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user to be created. Must exactly match the
user's name in the external LDAP, prefixed with an '@'.
Must not be the same name as an existing user. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_user_external_request(request, callback) → {Object}
Creates a new external user (a user whose credentials are managed by an
external LDAP).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_user_internal(name, password, options, callback) → {Object}
Creates a new internal user (a user whose credentials are managed by the
database system).
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user to be created. Must contain only
lowercase letters, digits, and underscores, and cannot
begin with a digit. Must not be the same name as an
existing user or role. |
password |
String
|
Initial password of the user to be created. May be
an empty string for no password. |
options |
Object
|
Optional parameters.
- 'resource_group': Name of an existing
resource group to associate with this user
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
create_user_internal_request(request, callback) → {Object}
Creates a new internal user (a user whose credentials are managed by the
database system).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
decode(o) → {Object|Array.<Object>}
Decodes a JSON string, or array of JSON strings, returned from GPUdb into
JSON object(s).
Parameters:
Name |
Type |
Description |
o |
String
|
Array.<String>
|
The JSON string(s) to decode. |
- Source:
Returns:
The decoded JSON object(s).
-
Type
-
Object
|
Array.<Object>
delete_graph(graph_name, options, callback) → {Object}
Deletes an existing graph from the graph server and/or persist.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph to be deleted. |
options |
Object
|
Optional parameters.
- 'delete_persist': If set to
true , the graph is removed from the
server and persist. If set to false ,
the graph is removed from the server but is left in
persist. The graph can be reloaded from persist if
it is recreated with the same 'graph_name'.
Supported values:
The default value is 'true'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
delete_graph_request(request, callback) → {Object}
Deletes an existing graph from the graph server and/or persist.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
delete_proc(proc_name, options, callback) → {Object}
Deletes a proc. Any currently running instances of the proc will be killed.
Parameters:
Name |
Type |
Description |
proc_name |
String
|
Name of the proc to be deleted. Must be the name
of a currently existing proc. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
delete_proc_request(request, callback) → {Object}
Deletes a proc. Any currently running instances of the proc will be killed.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
delete_records(table_name, expressions, options, callback) → {Object}
Deletes record(s) matching the provided criteria from the given table. The
record selection criteria can either be one or more
expressions
(matching multiple records), a single record
identified by record_id
option, or all records when using
delete_all_records
. Note that the three selection criteria are
mutually exclusive. This operation cannot be run on a collection or a view.
The operation is synchronous, meaning that a response will not be available
until the request is completely processed and all the matching records are
deleted.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table from which to delete records.
The set must be a currently existing table and
not a collection or a view. |
expressions |
Array.<String>
|
A list of the actual predicates, one for each
select; format should follow the guidelines
provided here. Specifying one or
more expressions is mutually
exclusive to specifying
record_id in the
options . |
options |
Object
|
Optional parameters.
- 'global_expression': An optional
global expression to reduce the search space of the
expressions . The default value is ''.
- 'record_id': A record ID identifying a
single record, obtained at the time of
insertion of the record
or by calling
GPUdb#get_records_from_collection
with the 'return_record_ids' option. This option
cannot be used to delete records from replicated tables.
- 'delete_all_records': If set to
true , all records in the table will be
deleted. If set to false , then the
option is effectively ignored.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
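A minimal sketch deleting records matched by a single expression from a hypothetical table; expressions are mutually exclusive with the record_id and delete_all_records options described above:
var GPUdb = require("./GPUdb.js");               // path to the API file (assumed)
var gpudb = new GPUdb("http://localhost:9191");  // hypothetical server URL
var response = gpudb.delete_records(
    "weather",                        // table_name (not a collection or view)
    ["temperature < -100"],           // expressions: delete obviously bad readings
    {}                                // options
);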
delete_records_request(request, callback) → {Object}
Deletes record(s) matching the provided criteria from the given table. The
record selection criteria can either be one or more
expressions
(matching multiple records), a single record
identified by record_id
option, or all records when using
delete_all_records
. Note that the three selection criteria are
mutually exclusive. This operation cannot be run on a collection or a view.
The operation is synchronous, meaning that a response will not be available
until the request is completely processed and all the matching records are
deleted.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
delete_resource_group(name, options, callback) → {Object}
Deletes a resource group.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the resource group to be deleted. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
delete_resource_group_request(request, callback) → {Object}
Deletes a resource group.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
delete_role(name, options, callback) → {Object}
Deletes an existing role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the role to be deleted. Must be an existing
role. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
delete_role_request(request, callback) → {Object}
Deletes an existing role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
delete_user(name, options, callback) → {Object}
Deletes an existing user.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user to be deleted. Must be an existing
user. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
delete_user_request(request, callback) → {Object}
Deletes an existing user.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
execute_proc(proc_name, params, bin_params, input_table_names, input_column_names, output_table_names, options, callback) → {Object}
Executes a proc. This endpoint is asynchronous and does not wait for the
proc to complete before returning.
Parameters:
Name |
Type |
Description |
proc_name |
String
|
Name of the proc to execute. Must be the name of
a currently existing proc. |
params |
Object
|
A map containing named parameters to pass to the
proc. Each key/value pair specifies the name of a
parameter and its value. |
bin_params |
Object
|
A map containing named binary parameters to pass
to the proc. Each key/value pair specifies the
name of a parameter and its value. |
input_table_names |
Array.<String>
|
Names of the tables containing data to
be passed to the proc. Each name
specified must be the name of a
currently existing table. If no table
names are specified, no data will be
passed to the proc. |
input_column_names |
Object
|
Map of table names from
input_table_names to lists
of names of columns from those tables
that will be passed to the proc. Each
column name specified must be the name
of an existing column in the
corresponding table. If a table name
from input_table_names is
not included, all columns from that
table will be passed to the proc. |
output_table_names |
Array.<String>
|
Names of the tables to which output
data from the proc will be written. If
a specified table does not exist, it
will automatically be created with the
same schema as the corresponding table
(by order) from
input_table_names ,
excluding any primary and shard keys.
If a specified table is a
non-persistent result table, it must
not have primary or shard keys. If no
table names are specified, no output
data can be returned from the proc. |
options |
Object
|
Optional parameters.
- 'cache_input': A comma-delimited list
of table names from
input_table_names
from which input data will be cached for use in
subsequent calls to
GPUdb#execute_proc with the
use_cached_input option. Cached input
data will be retained until the proc status is
cleared with the
clear_complete
option of GPUdb#show_proc_status and
all proc instances using the cached data have
completed. The default value is ''.
- 'use_cached_input': A comma-delimited
list of run IDs (as returned from prior calls to
GPUdb#execute_proc ) of running or
completed proc instances from which input data
cached using the cache_input option
will be used. Cached input data will not be used
for any tables specified in
input_table_names , but data from all
other tables cached for the specified run IDs will
be passed to the proc. If the same table was cached
for multiple specified run IDs, the cached data
from the first run ID specified in the list that
includes that table will be used. The default
value is ''.
- 'kifs_input_dirs': A comma-delimited
list of KiFS directories whose local files will be
made directly accessible to the proc through the
API. (All KiFS files, local or not, are also
accessible through the file system below the KiFS
mount point.) Each name specified must be the name of
an existing KiFS directory. The default value is
''.
- 'run_tag': A string that, if not
empty, can be used in subsequent calls to
GPUdb#show_proc_status or
GPUdb#kill_proc to identify the proc
instance. The default value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
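A minimal sketch of launching a proc, assuming db as before; the proc, table, and column names are hypothetical, and the run_id field read from the response is an assumption.
db.execute_proc(
    "my_proc",                      // name of an existing proc (hypothetical)
    { iterations: "10" },           // named string parameters
    {},                             // no binary parameters
    ["input_table"],                // data passed to the proc
    { input_table: ["x", "y"] },    // only these columns are passed
    ["output_table"],               // created automatically if missing
    { run_tag: "nightly" },         // tag usable with show_proc_status / kill_proc
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log("Proc started, run ID:", response.run_id);  // field name assumed
    }
);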
execute_proc_request(request, callback) → {Object}
Executes a proc. This endpoint is asynchronous and does not wait for the
proc to complete before returning.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
execute_sql(statement, offset, limit, request_schema_str, data, options, callback) → {Object}
Executes a SQL statement (query, DML, or DDL).
Parameters:
Name |
Type |
Description |
statement |
String
|
SQL statement (query, DML, or DDL) to be executed |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to
indicate that the maximum number of results allowed
by the server should be returned. The number of
records returned will never exceed the server's own
limit, defined by the max_get_records_size parameter in
the server configuration. Use
has_more_records to see if more records
exist in the result to be fetched, and
offset & limit to request
subsequent pages of results. |
request_schema_str |
String
|
Avro schema of data . |
data |
Array.<String>
|
An array of binary-encoded data for the records to
be bound to the SQL query. |
options |
Object
|
Optional parameters.
- 'parallel_execution': If
false , disables the parallel step
execution of the given query.
Supported values:
The default value is 'true'.
- 'cost_based_optimization': If
false , disables the cost-based
optimization of the given query.
Supported values:
The default value is 'false'.
- 'plan_cache': If
false ,
disables plan caching for the given query.
Supported values:
The default value is 'true'.
- 'rule_based_optimization': If
false , disables rule-based rewrite
optimizations for the given query
Supported values:
The default value is 'true'.
- 'results_caching': If
false , disables caching of the results
of the given query
Supported values:
The default value is 'true'.
- 'paging_table': When empty or the
specified paging table does not exist, the system will
create a paging table and return it when the query output
has more records than the user asked for. If the paging
table exists in the system, the records from the
paging table are returned without evaluating the
query.
- 'paging_table_ttl': Sets the TTL of the paging table.
- 'distributed_joins': If
true , enables the use of distributed
joins in servicing the given query. Any query
requiring a distributed join will succeed, though
hints can be used in the query to change the
distribution of the source data to allow the query
to succeed.
Supported values:
The default value is 'false'.
- 'distributed_operations': If
true , enables the use of distributed
operations in servicing the given query. Any query
requiring a distributed join will succeed, though
hints can be used in the query to change the
distribution of the source data to allow the query
to succeed.
Supported values:
The default value is 'false'.
- 'ssq_optimization': If
false , scalar subqueries will be
translated into joins
Supported values:
The default value is 'true'.
- 'late_materialization': If
true , join/filter results will
always be materialized (saved in result-table
format)
Supported values:
The default value is 'false'.
- 'ttl': Sets the TTL of the intermediate result
tables used in query execution.
- 'update_on_existing_pk': Can be used
to customize behavior when the updated primary key
value already exists as described in
GPUdb#insert_records .
Supported values:
The default value is 'false'.
- 'preserve_dict_encoding': If
true , then columns that were dict
encoded in the source table will be dict encoded in
the projection table.
Supported values:
The default value is 'true'.
- 'validate_change_column': When
changing a column using alter table, validate the
change before applying it. If
true ,
then validate all values. A value too large (or too
long) for the new type will prevent any change. If
false , then when a value is too large
or long, it will be truncated.
Supported values:
- 'true': true
- 'false': false
The default value is 'true'.
- 'prepare_mode': If
true ,
compiles a query into an execution plan and saves
it in the query cache. Query execution is not performed
and an empty response will be returned to the user
Supported values:
The default value is 'false'.
- 'view_id':
The default value is ''.
- 'no_count':
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
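A minimal sketch, assuming db as before; the empty schema string and empty data array reflect a statement with no bind parameters, and has_more_records is the response field mentioned in the limit description above.
db.execute_sql(
    "SELECT x, y FROM my_table WHERE x > 10",  // SQL statement (hypothetical table)
    0,                                         // offset: start at the first result
    100,                                       // limit: at most 100 records
    "",                                        // request_schema_str: no bind schema
    [],                                        // data: no bind records
    { parallel_execution: "true" },            // option values are passed as strings
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log("More records available:", response.has_more_records);
    }
);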
execute_sql_request(request, callback) → {Object}
Executes a SQL statement (query, DML, or DDL).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter(table_name, view_name, expression, options, callback) → {Object}
Filters data based on the specified expression. The results are stored in a
result set
with the given
view_name
.
For details see Expressions.
The response message contains the number of points for which the expression
evaluated to be true, which is equivalent to the size of the result view.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to filter. This may be the
name of a collection, a table, or a view (when
chaining queries). If filtering a collection,
all child tables where the filter expression is
valid will be filtered; the filtered result
tables will then be placed in a collection
specified by view_name . |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Has the same naming
restrictions as tables. |
expression |
String
|
The select expression to filter the specified
table. For details see Expressions. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
- 'view_id': view this filtered-view is
part of. The default value is ''.
- 'ttl': Sets the TTL of the view specified in
view_name .
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
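A minimal sketch, assuming db as before; the table, view, and column names are hypothetical, and the count field read from the response is assumed from the description above.
db.filter(
    "my_table",            // table (or view) to filter
    "my_filtered_view",    // view that will hold the matching records
    "x > 10 and y < 5",    // filter expression
    {},                    // no options
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log("Records matching the expression:", response.count);  // field name assumed
    }
);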
filter_by_area(table_name, view_name, x_column_name, x_vector, y_column_name, y_vector, options, callback) → {Object}
Calculates which objects from a table are within a named area of interest
(NAI/polygon). The operation is synchronous, meaning that a response will
not be returned until all the matching objects are fully available. The
response payload provides the count of the resulting set. A new resultant
set (view) which satisfies the input NAI restriction specification is
created with the name view_name
passed in as part of the input.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to filter. This may be the
name of a collection, a table, or a view (when
chaining queries). If filtering a collection,
all child tables where the filter expression is
valid will be filtered; the filtered result
tables will then be placed in a collection
specified by view_name . |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Has the same naming
restrictions as tables. |
x_column_name |
String
|
Name of the column containing the x values to
be filtered. |
x_vector |
Array.<Number>
|
List of x coordinates of the vertices of the
polygon representing the area to be filtered. |
y_column_name |
String
|
Name of the column containing the y values to
be filtered. |
y_vector |
Array.<Number>
|
List of y coordinates of the vertices of the
polygon representing the area to be filtered. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
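A minimal sketch, assuming db as before, with a hypothetical table and a square polygon supplied as parallel x and y vertex lists.
db.filter_by_area(
    "my_table", "inside_area_view",
    "x", [0, 10, 10, 0],   // x coordinates of the polygon vertices
    "y", [0, 0, 10, 10],   // y coordinates of the polygon vertices
    {},
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);  // includes the count of records inside the polygon
    }
);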
filter_by_area_geometry(table_name, view_name, column_name, x_vector, y_vector, options, callback) → {Object}
Calculates which geospatial geometry objects from a table intersect a named
area of interest (NAI/polygon). The operation is synchronous, meaning that a
response will not be returned until all the matching objects are fully
available. The response payload provides the count of the resulting set. A
new resultant set (view) which satisfies the input NAI restriction
specification is created with the name view_name
passed in as
part of the input.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to filter. This may be the
name of a collection, a table, or a view (when
chaining queries). If filtering a collection,
all child tables where the filter expression is
valid will be filtered; the filtered result
tables will then be placed in a collection
specified by view_name . |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Must not be an
already existing collection, table or view. |
column_name |
String
|
Name of the geospatial geometry column to be
filtered. |
x_vector |
Array.<Number>
|
List of x coordinates of the vertices of the
polygon representing the area to be filtered. |
y_vector |
Array.<Number>
|
List of y coordinates of the vertices of the
polygon representing the area to be filtered. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_area_geometry_request(request, callback) → {Object}
Calculates which geospatial geometry objects from a table intersect a named
area of interest (NAI/polygon). The operation is synchronous, meaning that a
response will not be returned until all the matching objects are fully
available. The response payload provides the count of the resulting set. A
new resultant set (view) which satisfies the input NAI restriction
specification is created with the name view_name
passed in as
part of the input.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_area_request(request, callback) → {Object}
Calculates which objects from a table are within a named area of interest
(NAI/polygon). The operation is synchronous, meaning that a response will
not be returned until all the matching objects are fully available. The
response payload provides the count of the resulting set. A new resultant
set (view) which satisfies the input NAI restriction specification is
created with the name view_name
passed in as part of the input.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_box(table_name, view_name, x_column_name, min_x, max_x, y_column_name, min_y, max_y, options, callback) → {Object}
Calculates how many objects within the given table lie in a rectangular box.
The operation is synchronous, meaning that a response will not be returned
until all the objects are fully available. The response payload provides the
count of the resulting set. A new resultant set which satisfies the input
NAI restriction specification is also created when a view_name
is passed in as part of the input payload.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the bounding box
operation will be performed. Must be an existing
table. |
view_name |
String
|
Optional name of the result view that will be
created containing the results of the query. Has
the same naming restrictions as tables. |
x_column_name |
String
|
Name of the column on which to perform the
bounding box query. Must be a valid numeric
column. |
min_x |
Number
|
Lower bound for the column chosen by
x_column_name . Must be less than or
equal to max_x . |
max_x |
Number
|
Upper bound for x_column_name . Must be
greater than or equal to min_x . |
y_column_name |
String
|
Name of a column on which to perform the
bounding box query. Must be a valid numeric
column. |
min_y |
Number
|
Lower bound for y_column_name . Must be
less than or equal to max_y . |
max_y |
Number
|
Upper bound for y_column_name . Must be
greater than or equal to min_y . |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
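A minimal sketch, assuming db as before and hypothetical table and column names.
db.filter_by_box(
    "my_table", "in_box_view",
    "x", -10, 10,   // x column with its lower and upper bounds
    "y", -5, 5,     // y column with its lower and upper bounds
    {},
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);  // includes the count of records inside the box
    }
);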
filter_by_box_geometry(table_name, view_name, column_name, min_x, max_x, min_y, max_y, options, callback) → {Object}
Calculates which geospatial geometry objects from a table intersect a
rectangular box. The operation is synchronous, meaning that a response will
not be returned until all the objects are fully available. The response
payload provides the count of the resulting set. A new resultant set which
satisfies the input NAI restriction specification is also created when a
view_name
is passed in as part of the input payload.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the bounding box
operation will be performed. Must be an existing
table. |
view_name |
String
|
Optional name of the result view that will be
created containing the results of the query. Must
not be an already existing collection, table or
view. |
column_name |
String
|
Name of the geospatial geometry column to be
filtered. |
min_x |
Number
|
Lower bound for the x-coordinate of the rectangular
box. Must be less than or equal to
max_x . |
max_x |
Number
|
Upper bound for the x-coordinate of the rectangular
box. Must be greater than or equal to
min_x . |
min_y |
Number
|
Lower bound for the y-coordinate of the rectangular
box. Must be less than or equal to
max_y . |
max_y |
Number
|
Upper bound for the y-coordinate of the rectangular
box. Must be greater than or equal to
min_y . |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_box_geometry_request(request, callback) → {Object}
Calculates which geospatial geometry objects from a table intersect a
rectangular box. The operation is synchronous, meaning that a response will
not be returned until all the objects are fully available. The response
payload provides the count of the resulting set. A new resultant set which
satisfies the input NAI restriction specification is also created when a
view_name
is passed in as part of the input payload.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_box_request(request, callback) → {Object}
Calculates how many objects within the given table lie in a rectangular box.
The operation is synchronous, meaning that a response will not be returned
until all the objects are fully available. The response payload provides the
count of the resulting set. A new resultant set which satisfies the input
NAI restriction specification is also created when a view_name
is passed in as part of the input payload.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_geometry(table_name, view_name, column_name, input_wkt, operation, options, callback) → {Object}
Applies a geometry filter against a geospatial geometry column in a given
table, collection or view. The filtering geometry is provided by
input_wkt
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter by
geometry will be performed. Must be an existing
table, collection or view containing a
geospatial geometry column. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Has the same naming
restrictions as tables. |
column_name |
String
|
Name of the column to be used in the filter.
Must be a geospatial geometry column. |
input_wkt |
String
|
A geometry in WKT format that will be used to
filter the objects in table_name . |
operation |
String
|
The geometric filtering operation to perform
Supported values:
- 'contains': Matches records that
contain the given WKT in
input_wkt ,
i.e. the given WKT is within the bounds of a
record's geometry.
- 'crosses': Matches records that
cross the given WKT.
- 'disjoint': Matches records that are
disjoint from the given WKT.
- 'equals': Matches records that are
the same as the given WKT.
- 'intersects': Matches records that
intersect the given WKT.
- 'overlaps': Matches records that
overlap the given WKT.
- 'touches': Matches records that
touch the given WKT.
- 'within': Matches records that are
within the given WKT.
|
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
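A minimal sketch, assuming db as before; the table and geometry column are hypothetical, and the WKT is an arbitrary square.
db.filter_by_geometry(
    "my_table", "wkt_filtered_view",
    "geom",                                     // geospatial geometry column
    "POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))",   // filtering geometry in WKT
    "intersects",                               // one of the supported operations
    {},
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);
    }
);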
filter_by_geometry_request(request, callback) → {Object}
Applies a geometry filter against a geospatial geometry column in a given
table, collection or view. The filtering geometry is provided by
input_wkt
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_list(table_name, view_name, column_values_map, options, callback) → {Object}
Calculates which records from a table have values in the given list for the
corresponding column. The operation is synchronous, meaning that a response
will not be returned until all the objects are fully available. The response
payload provides the count of the resulting set. A new resultant set (view)
which satisfies the input filter specification is also created if a
view_name
is passed in as part of the request.
For example, if a type definition has the columns 'x' and 'y', then a filter
by list query with the column map {"x":["10.1", "2.3"], "y":["0.0", "-31.5",
"42.0"]} will return the count of all data points whose x and y values match
both in the respective x- and y-lists, e.g., "x = 10.1 and y = 0.0", "x =
2.3 and y = -31.5", etc. However, a record with "x = 10.1 and y = -31.5" or
"x = 2.3 and y = 0.0" would not be returned because the values in the given
lists do not correspond.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to filter. This may be the
name of a collection, a table, or a view (when
chaining queries). If filtering a collection,
all child tables where the filter expression is
valid will be filtered; the filtered result
tables will then be placed in a collection
specified by view_name . |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Has the same naming
restrictions as tables. |
column_values_map |
Object
|
List of values for the corresponding
column in the table |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
- 'filter_mode': String indicating the
filter mode, either 'in_list' or 'not_in_list'.
Supported values:
- 'in_list': The filter will match all
items that are in the provided list(s).
- 'not_in_list': The filter will match
all items that are not in the provided list(s).
The default value is 'in_list'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
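A minimal sketch, assuming db as before and reusing the x/y value lists from the example in the description above.
db.filter_by_list(
    "my_table", "list_match_view",
    { x: ["10.1", "2.3"], y: ["0.0", "-31.5", "42.0"] },  // column -> list of values
    { filter_mode: "in_list" },
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);  // includes the count of matching records
    }
);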
filter_by_list_request(request, callback) → {Object}
Calculates which records from a table have values in the given list for the
corresponding column. The operation is synchronous, meaning that a response
will not be returned until all the objects are fully available. The response
payload provides the count of the resulting set. A new resultant set (view)
which satisfies the input filter specification is also created if a
view_name
is passed in as part of the request.
For example, if a type definition has the columns 'x' and 'y', then a filter
by list query with the column map {"x":["10.1", "2.3"], "y":["0.0", "-31.5",
"42.0"]} will return the count of all data points whose x and y values match
both in the respective x- and y-lists, e.g., "x = 10.1 and y = 0.0", "x =
2.3 and y = -31.5", etc. However, a record with "x = 10.1 and y = -31.5" or
"x = 2.3 and y = 0.0" would not be returned because the values in the given
lists do not correspond.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_radius(table_name, view_name, x_column_name, x_center, y_column_name, y_center, radius, options, callback) → {Object}
Calculates which objects from a table lie within a circle with the given
radius and center point (i.e. circular NAI). The operation is synchronous,
meaning that a response will not be returned until all the objects are fully
available. The response payload provides the count of the resulting set. A
new resultant set (view) which satisfies the input circular NAI restriction
specification is also created if a
view_name
is passed in as
part of the request.
For track data, all track points that lie within the circle plus one point
on either side of the circle (if the track goes beyond the circle) will be
included in the result.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter by radius
operation will be performed. Must be an
existing table. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Has the same naming
restrictions as tables. |
x_column_name |
String
|
Name of the column to be used for the
x-coordinate (the longitude) of the center. |
x_center |
Number
|
Value of the longitude of the center. Must be
within [-180.0, 180.0]. |
y_column_name |
String
|
Name of the column to be used for the
y-coordinate (the latitude) of the center. |
y_center |
Number
|
Value of the latitude of the center. Must be
within [-90.0, 90.0]. |
radius |
Number
|
The radius of the circle within which the search
will be performed. Must be a non-zero positive
value. It is in meters; so, for example, a value of
'42000' means 42 km. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
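A minimal sketch, assuming db as before; the longitude/latitude column names and center point are hypothetical, and the 42000 m radius echoes the parameter description.
db.filter_by_radius(
    "my_table", "nearby_view",
    "lon", -71.06,   // x (longitude) column and center value
    "lat", 42.36,    // y (latitude) column and center value
    42000,           // radius in meters (42 km)
    {},
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);
    }
);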
filter_by_radius_geometry(table_name, view_name, column_name, x_center, y_center, radius, options, callback) → {Object}
Calculates which geospatial geometry objects from a table intersect a circle
with the given radius and center point (i.e. circular NAI). The operation is
synchronous, meaning that a response will not be returned until all the
objects are fully available. The response payload provides the count of the
resulting set. A new resultant set (view) which satisfies the input circular
NAI restriction specification is also created if a view_name
is
passed in as part of the request.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter by radius
operation will be performed. Must be an
existing table. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Must not be an
already existing collection, table or view. |
column_name |
String
|
Name of the geospatial geometry column to be
filtered. |
x_center |
Number
|
Value of the longitude of the center. Must be
within [-180.0, 180.0]. |
y_center |
Number
|
Value of the latitude of the center. Must be
within [-90.0, 90.0]. |
radius |
Number
|
The radius of the circle within which the search
will be performed. Must be a non-zero positive
value. It is in meters; so, for example, a value of
'42000' means 42 km. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_radius_geometry_request(request, callback) → {Object}
Calculates which geospatial geometry objects from a table intersect a circle
with the given radius and center point (i.e. circular NAI). The operation is
synchronous, meaning that a response will not be returned until all the
objects are fully available. The response payload provides the count of the
resulting set. A new resultant set (view) which satisfies the input circular
NAI restriction specification is also created if a view_name
is
passed in as part of the request.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_radius_request(request, callback) → {Object}
Calculates which objects from a table lie within a circle with the given
radius and center point (i.e. circular NAI). The operation is synchronous,
meaning that a response will not be returned until all the objects are fully
available. The response payload provides the count of the resulting set. A
new resultant set (view) which satisfies the input circular NAI restriction
specification is also created if a
view_name
is passed in as
part of the request.
For track data, all track points that lie within the circle plus one point
on either side of the circle (if the track goes beyond the circle) will be
included in the result.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_range(table_name, view_name, column_name, lower_bound, upper_bound, options, callback) → {Object}
Calculates which objects from a table have a column that is within the given
bounds. An object from the table identified by
table_name
is
added to the view
view_name
if its column is within
[
lower_bound
,
upper_bound
] (inclusive). The
operation is synchronous. The response provides a count of the number of
objects which passed the bound filter. Although this functionality can also
be accomplished with the standard filter function, it is more efficient.
For track objects, the count reflects how many points fall within the given
bounds (which may not include all the track points of any given track).
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter by range
operation will be performed. Must be an
existing table. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Has the same naming
restrictions as tables. |
column_name |
String
|
Name of a column on which the operation would
be applied. |
lower_bound |
Number
|
Value of the lower bound (inclusive). |
upper_bound |
Number
|
Value of the upper bound (inclusive). |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
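A minimal sketch, assuming db as before and a hypothetical numeric column.
db.filter_by_range(
    "my_table", "in_range_view",
    "x",        // column to test
    0, 100,     // inclusive lower and upper bounds
    {},
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);  // includes the count of records within the bounds
    }
);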
filter_by_range_request(request, callback) → {Object}
Calculates which objects from a table have a column that is within the given
bounds. An object from the table identified by
table_name
is
added to the view
view_name
if its column is within
[
lower_bound
,
upper_bound
] (inclusive). The
operation is synchronous. The response provides a count of the number of
objects which passed the bound filter. Although this functionality can also
be accomplished with the standard filter function, it is more efficient.
For track objects, the count reflects how many points fall within the given
bounds (which may not include all the track points of any given track).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_series(table_name, view_name, track_id, target_track_ids, options, callback) → {Object}
Filters objects matching all points of the given track (works only on track
type data). It allows users to specify a particular track to find all other
points in the table that fall within specified ranges (spatial and
temporal) of all points of the given track. Additionally, the user can
specify another track to see if the two intersect (or go close to each other
within the specified ranges). The user also has the flexibility of using
different metrics for the spatial distance calculation: Euclidean (flat
geometry) or Great Circle (spherical geometry to approximate the Earth's
surface distances). The filtered points are stored in a newly created result
set. The return value of the function is the number of points in the
resultant set (view).
This operation is synchronous, meaning that a response will not be returned
until all the objects are fully available.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter by track
operation will be performed. Must be a currently
existing table with a track present. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Has the same naming
restrictions as tables. |
track_id |
String
|
The ID of the track which will act as the
filtering points. Must be an existing track within
the given table. |
target_track_ids |
Array.<String>
|
Up to one track ID to intersect with the
"filter" track. If provided, it must
be a valid track ID within the given
set. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
- 'spatial_radius': A positive number
passed as a string representing the radius of the
search area centered around each track point's
geospatial coordinates. The value is interpreted in
meters. Required parameter.
- 'time_radius': A positive number
passed as a string representing the maximum
allowable time difference between the timestamps of
a filtered object and the given track's points. The
value is interpreted in seconds. Required
parameter.
- 'spatial_distance_metric': A string
representing the coordinate system to use for the
spatial search criteria. Acceptable values are
'euclidean' and 'great_circle'. Optional parameter;
default is 'euclidean'.
Supported values:
- 'euclidean'
- 'great_circle'
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
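A minimal sketch, assuming db as before; the track table and track ID are hypothetical, and the required spatial_radius and time_radius options are passed as strings per the descriptions above.
db.filter_by_series(
    "my_tracks", "near_track_view",
    "TRACK_42",    // ID of the track whose points act as the filter (hypothetical)
    [],            // no second track to intersect with
    {
        spatial_radius: "500",                  // meters around each track point
        time_radius: "60",                      // seconds around each point's timestamp
        spatial_distance_metric: "great_circle"
    },
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);
    }
);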
filter_by_series_request(request, callback) → {Object}
Filters objects matching all points of the given track (works only on track
type data). It allows users to specify a particular track to find all other
points in the table that fall within specified ranges (spatial and
temporal) of all points of the given track. Additionally, the user can
specify another track to see if the two intersect (or go close to each other
within the specified ranges). The user also has the flexibility of using
different metrics for the spatial distance calculation: Euclidean (flat
geometry) or Great Circle (spherical geometry to approximate the Earth's
surface distances). The filtered points are stored in a newly created result
set. The return value of the function is the number of points in the
resultant set (view).
This operation is synchronous, meaning that a response will not be returned
until all the objects are fully available.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_string(table_name, view_name, expression, mode, column_names, options, callback) → {Object}
Calculates which objects from a table, collection, or view match a string
expression for the given string columns. The option 'case_sensitive' can be
used to modify the behavior for all modes except 'search'. For 'search' mode
details and limitations, see
Full Text Search.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter operation
will be performed. Must be an existing table,
collection or view. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Has the same naming
restrictions as tables. |
expression |
String
|
The expression with which to filter the table. |
mode |
String
|
The string filtering mode to apply. See below for
details.
Supported values:
- 'search': Full text search query with
wildcards and boolean operators. Note that for this
mode, no column can be specified in
column_names ; all string columns of the
table that have text search enabled will be searched.
- 'equals': Exact whole-string match
(accelerated).
- 'contains': Partial substring match (not
accelerated). If the column is a string type
(non-charN) and the number of records is too large, it
will return 0.
- 'starts_with': Strings that start with
the given expression (not accelerated). If the column
is a string type (non-charN) and the number of records
is too large, it will return 0.
- 'regex': Full regular expression search
(not accelerated). If the column is a string type
(non-charN) and the number of records is too large, it
will return 0.
|
column_names |
Array.<String>
|
List of columns on which to apply the
filter. Ignored for 'search' mode. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
- 'case_sensitive': If 'false' then
string filtering will ignore case. Does not apply
to 'search' mode.
Supported values:
The default value is 'true'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
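A minimal sketch, assuming db as before and a hypothetical string column.
db.filter_by_string(
    "my_table", "string_match_view",
    "example",                    // expression to match
    "contains",                   // filtering mode (substring match)
    ["description"],              // columns to filter; ignored in 'search' mode
    { case_sensitive: "false" },
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);
    }
);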
filter_by_string_request(request, callback) → {Object}
Calculates which objects from a table, collection, or view match a string
expression for the given string columns. The option 'case_sensitive' can be
used to modify the behavior for all modes except 'search'. For 'search' mode
details and limitations, see
Full Text Search.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_table(table_name, view_name, column_name, source_table_name, source_table_column_name, options, callback) → {Object}
Filters objects in one table based on objects in another table. The user
must specify matching column types from the two tables (i.e. the target
table from which objects will be filtered and the source table based on
which the filter will be created); the column names need not be the same. If
a view_name
is specified, then the filtered objects will then
be put in a newly created view. The operation is synchronous, meaning that a
response will not be returned until all objects are fully available in the
result view. The return value contains the count (i.e. the size) of the
resulting view.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table whose data will be filtered.
Must be an existing table. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Has the same naming
restrictions as tables. |
column_name |
String
|
Name of the column by whose value the data will
be filtered from the table designated by
table_name . |
source_table_name |
String
|
Name of the table whose data will be
compared against in the table called
table_name . Must be an
existing table. |
source_table_column_name |
String
|
Name of the column in the
source_table_name
whose values will be used as the
filter for table
table_name . Must be a
geospatial geometry column if in
'spatial' mode; otherwise, must
match the type of the
column_name . |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
- 'filter_mode': String indicating the
filter mode, either
in_table or
not_in_table .
Supported values:
- 'in_table'
- 'not_in_table'
The default value is 'in_table'.
- 'mode': Mode - should be either
spatial or normal .
Supported values:
The default value is 'normal'.
- 'buffer': Buffer size, in meters. Only
relevant for
spatial mode. The
default value is '0'.
- 'buffer_method': Method used to buffer
polygons. Only relevant for
spatial
mode.
Supported values:
- 'normal'
- 'geos': Use geos 1 edge per corner
algorithm
The default value is 'normal'.
- 'max_partition_size': Maximum number
of points in a partition. Only relevant for
spatial mode. The default value is
'0'.
- 'max_partition_score': Maximum number
of points * edges in a partition. Only relevant for
spatial mode. The default value is
'8000000'.
- 'x_column_name': Name of column
containing x value of point being filtered in
spatial mode. The default value is
'x'.
- 'y_column_name': Name of column
containing y value of point being filtered in
spatial mode. The default value is
'y'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
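A minimal sketch, assuming db as before; the target table, source table, and column names are hypothetical.
db.filter_by_table(
    "orders", "vip_orders_view",
    "customer_id",               // column in the target table
    "vip_customers",             // source table providing the filter values
    "id",                        // column in the source table
    { filter_mode: "in_table" },
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);  // includes the count of matching records
    }
);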
filter_by_table_request(request, callback) → {Object}
Filters objects in one table based on objects in another table. The user
must specify matching column types from the two tables (i.e. the target
table from which objects will be filtered and the source table based on
which the filter will be created); the column names need not be the same. If
a view_name
is specified, then the filtered objects will then
be put in a newly created view. The operation is synchronous, meaning that a
response will not be returned until all objects are fully available in the
result view. The return value contains the count (i.e. the size) of the
resulting view.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_by_value(table_name, view_name, is_string, value, value_str, column_name, options, callback) → {Object}
Calculates which objects from a table have a particular value for a
particular column. The input parameters provide a way to specify either a
String or a Double valued column and a desired value for the column on which
the filter is performed. The operation is synchronous, meaning that a
response will not be returned until all the objects are fully available. The
response payload provides the count of the resulting set. A new result view
which satisfies the input filter restriction specification is also created
with a view name passed in as part of the input payload. Although this
functionality can also be accomplished with the standard filter function, it
is more efficient.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of an existing table on which to perform
the calculation. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results. Has the same naming
restrictions as tables. |
is_string |
Boolean
|
Indicates whether the value being searched for
is string or numeric. |
value |
Number
|
The value to search for. |
value_str |
String
|
The string value to search for. |
column_name |
String
|
Name of a column on which the filter by value
would be applied. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
view. If the collection provided is non-existent,
the collection will be automatically created. If
empty, then the newly created view will be
top-level.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
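A minimal sketch, assuming db as before; the unused value parameter (value for string searches, value_str for numeric ones) is passed as a placeholder here and is assumed to be ignored.
var handleResponse = function(err, response) {
    if (err) { console.error(err); return; }
    console.log(response);
};
// Numeric match: is_string = false, so 'value' is used and 'value_str' is ignored.
db.filter_by_value("my_table", "numeric_match_view", false, 42, "", "x", {}, handleResponse);
// String match: is_string = true, so 'value_str' is used and 'value' is ignored.
db.filter_by_value("my_table", "string_match_view", true, 0, "foo", "name", {}, handleResponse);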
filter_by_value_request(request, callback) → {Object}
Calculates which objects from a table have a particular value for a
particular column. The input parameters provide a way to specify either a
String or a Double valued column and a desired value for the column on which
the filter is performed. The operation is synchronous, meaning that a
response will not be returned until all the objects are fully available. The
response payload provides the count of the resulting set. A new result view
which satisfies the input filter restriction specification is also created
with a view name passed in as part of the input payload. Although this
functionality can also be accomplished with the standard filter function, it
is more efficient.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
filter_request(request, callback) → {Object}
Filters data based on the specified expression. The results are stored in a
result set
with the given
view_name
.
For details see Expressions.
The response message contains the number of points for which the expression
evaluated to be true, which is equivalent to the size of the result view.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
get_geo_json(table_name, offset, limit, options, callback) → {Object}
Retrieves records as GeoJSON from a given table, optionally filtered by an expression
and/or sorted by a column. This operation can be performed on tables, views,
or on homogeneous collections (collections containing tables of all the same
type). Records can be returned encoded as binary, json or geojson.
This operation supports paging through the data via the offset
and limit
parameters. Note that when paging through a table, if
the table (or the underlying table in case of a view) is updated (records
are inserted, deleted or modified) the records retrieved may differ between
calls based on the updates applied.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table from which the records will be
fetched. Must be a table, view or homogeneous
collection. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned. Or END_OF_SET (-9999) to
indicate that the max number of results should be
returned. |
options |
Object
|
- 'expression': Optional filter
expression to apply to the table.
- 'fast_index_lookup': Indicates if
indexes should be used to perform the lookup for a
given expression if possible. Only applicable if
there is no sorting, the expression contains only
equivalence comparisons based on existing tables
indexes and the range of requested values is from
[0 to END_OF_SET].
Supported values:
The default value is 'true'.
- 'sort_by': Optional column that the
data should be sorted by. Empty by default (i.e. no
sorting is applied).
- 'sort_order': String indicating how
the returned values should be sorted - ascending or
descending. If sort_order is provided, sort_by has
to be provided.
Supported values:
The default value is 'ascending'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
The GeoJSON containing the requested records.
-
Type
-
Object
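A minimal sketch, assuming db as before; the table name and filter expression are hypothetical.
db.get_geo_json(
    "my_geo_table",
    0,        // offset: start at the first record
    1000,     // limit: at most 1000 records
    { expression: "x > 0", sort_by: "x", sort_order: "ascending" },
    function(err, geojson) {
        if (err) { console.error(err); return; }
        console.log(geojson);  // GeoJSON containing the requested records
    }
);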
get_job(job_id, options, callback) → {Object}
Gets the status and result of an asynchronously running job. See
GPUdb#create_job
for starting an asynchronous job. Some
fields of the response are filled only after the submitted job has finished
execution.
Parameters:
Name |
Type |
Description |
job_id |
Number
|
A unique identifier for the job whose status and
result is to be fetched. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
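A minimal sketch, assuming db as before; the numeric job ID is a hypothetical value returned by an earlier GPUdb#create_job call.
db.get_job(12345, {}, function(err, response) {
    if (err) { console.error(err); return; }
    console.log(response);  // job status; some fields are filled only once the job finishes
});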
get_job_request(request, callback) → {Object}
Gets the status and result of an asynchronously running job. See
GPUdb#create_job
for starting an asynchronous job. Some
fields of the response are filled only after the submitted job has finished
execution.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
get_records(table_name, offset, limit, options, callback) → {Object}
Retrieves records from a given table, optionally filtered by an expression
and/or sorted by a column. This operation can be performed on tables, views,
or on homogeneous collections (collections containing tables of all the same
type). Records can be returned encoded as binary, json or geojson.
This operation supports paging through the data via the offset
and limit
parameters. Note that when paging through a table, if
the table (or the underlying table in case of a view) is updated (records
are inserted, deleted or modified) the records retrieved may differ between
calls based on the updates applied.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table from which the records will be
fetched. Must be a table, view or homogeneous
collection. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned. Or END_OF_SET (-9999) to
indicate that the max number of results should be
returned. The number of records returned will never
exceed the server's own limit, defined by the max_get_records_size parameter in
the server configuration. Use
has_more_records to see if more records
exist in the result to be fetched, and
offset & limit to request
subsequent pages of results. |
options |
Object
|
- 'expression': Optional filter
expression to apply to the table.
- 'fast_index_lookup': Indicates if
indexes should be used to perform the lookup for a
given expression if possible. Only applicable if
there is no sorting, the expression contains only
equivalence comparisons based on existing tables
indexes and the range of requested values is from
[0 to END_OF_SET].
Supported values:
The default value is 'true'.
- 'sort_by': Optional column that the
data should be sorted by. Empty by default (i.e. no
sorting is applied).
- 'sort_order': String indicating how
the returned values should be sorted - ascending or
descending. If sort_order is provided, sort_by has
to be provided.
Supported values:
The default value is 'ascending'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
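A minimal sketch, assuming db as before; the table name and expression are hypothetical, and the data field holding the decoded records is an assumption (has_more_records is the field named in the limit description above).
db.get_records(
    "my_table",
    0, 100,   // first page of up to 100 records
    { expression: "x > 10", sort_by: "x" },
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response.data);              // decoded records (field name assumed)
        console.log(response.has_more_records);  // whether another page can be fetched
    }
);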
get_records_by_column(table_name, column_names, offset, limit, options, callback) → {Object}
For a given table, retrieves the values from the requested column(s). Maps
of column name to the array of values as well as the column data type are
returned. This endpoint supports pagination with the
offset
and
limit
parameters.
Window functions,
which can perform operations like moving averages, are available through
this endpoint as well as GPUdb#create_projection
.
When using pagination, if the table (or the underlying table in the case of
a view) is modified (records are inserted, updated, or deleted) during a
call to the endpoint, the records or values retrieved may differ between
calls based on the type of the update, e.g., the contiguity across pages
cannot be relied upon.
If table_name
is empty, selection is performed against a
single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas
documentation.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which this operation will
be performed. An empty table name retrieves one
record from a single-row virtual table, where
columns specified should be constants or
constant expressions. The table cannot be a
parent set. |
column_names |
Array.<String>
|
The list of column values to retrieve. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to
indicate that the maximum number of results allowed
by the server should be returned. The number of
records returned will never exceed the server's own
limit, defined by the max_get_records_size parameter in
the server configuration. Use
has_more_records to see if more records
exist in the result to be fetched, and
offset & limit to request
subsequent pages of results. |
options |
Object
|
- 'expression': Optional filter
expression to apply to the table.
- 'sort_by': Optional column that the
data should be sorted by. Used in conjunction with
sort_order . The order_by
option can be used in lieu of sort_by
/ sort_order . The default value is
''.
- 'sort_order': String indicating how
the returned values should be sorted -
ascending or descending .
If sort_order is provided,
sort_by has to be provided.
Supported values:
The default value is 'ascending'.
- 'order_by': Comma-separated list of
the columns to be sorted by as well as the sort
direction, e.g., 'timestamp asc, x desc'. The
default value is ''.
- 'convert_wkts_to_wkbs': If true, then
WKT string columns will be returned as WKB bytes.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
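Example (an illustrative sketch; gpudb is assumed to be an existing GPUdb instance, and the table and column names are placeholders):
    // Retrieve three columns, newest first, letting the server apply its own cap.
    gpudb.get_records_by_column(
        "my_table",
        ["x", "y", "timestamp"],
        0,                                 // offset
        -9999,                             // END_OF_SET: up to the server's max_get_records_size
        { "order_by": "timestamp desc" },
        function(err, response) {
            if (err) { console.log(err); return; }
            console.log(response);         // values are returned as a dynamic schema
        }
    );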
get_records_by_column_request(request, callback) → {Object}
For a given table, retrieves the values from the requested column(s). Maps
of column name to the array of values as well as the column data type are
returned. This endpoint supports pagination with the
offset
and
limit
parameters.
Window functions,
which can perform operations like moving averages, are available through
this endpoint as well as GPUdb#create_projection
.
When using pagination, if the table (or the underlying table in the case of
a view) is modified (records are inserted, updated, or deleted) during a
call to the endpoint, the records or values retrieved may differ between
calls based on the type of the update, e.g., the contiguity across pages
cannot be relied upon.
If table_name
is empty, selection is performed against a
single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas
documentation.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
get_records_by_series(table_name, world_table_name, offset, limit, options, callback) → {Object}
Retrieves the complete series/track records from the given
world_table_name
based on the partial track information
contained in the
table_name
.
This operation supports paging through the data via the offset
and limit
parameters.
In contrast to GPUdb#get_records
this returns records grouped
by series/track. So if offset
is 0 and limit
is 5
this operation would return the first 5 series/tracks in
table_name
. Each series/track will be returned sorted by their
TIMESTAMP column.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the collection/table/view for which
series/tracks will be fetched. |
world_table_name |
String
|
Name of the table containing the complete
series/track information to be returned
for the tracks present in the
table_name . Typically this is
used when retrieving series/tracks from a
view (which contains partial
series/tracks) but the user wants to
retrieve the entire original
series/tracks. Can be blank. |
offset |
Number
|
A positive integer indicating the number of initial
series/tracks to skip (useful for paging through the
results). |
limit |
Number
|
A positive integer indicating the maximum number of
series/tracks to be returned, or END_OF_SET (-9999)
to indicate that the maximum number of results should be
returned. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
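Example (an illustrative sketch; the view and table names are placeholders and gpudb is an existing GPUdb instance):
    // Fetch the first five complete tracks implied by a filtered view.
    gpudb.get_records_by_series(
        "my_track_view",       // view containing partial tracks
        "my_track_table",      // table with the complete tracks (may be blank)
        0,                     // offset: first series/track
        5,                     // limit: five series/tracks
        {},
        function(err, response) {
            if (err) { console.log(err); return; }
            console.log(response);
        }
    );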
get_records_by_series_request(request, callback) → {Object}
Retrieves the complete series/track records from the given
world_table_name
based on the partial track information
contained in the
table_name
.
This operation supports paging through the data via the offset
and limit
parameters.
In contrast to GPUdb#get_records
this returns records grouped
by series/track. So if offset
is 0 and limit
is 5
this operation would return the first 5 series/tracks in
table_name
. Each series/track will be returned sorted by their
TIMESTAMP column.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
get_records_from_collection(table_name, offset, limit, options, callback) → {Object}
Retrieves records from a collection. The operation can optionally return the
record IDs which can be used in certain queries such as
GPUdb#delete_records
.
This operation supports paging through the data via the offset
and limit
parameters.
Note that when using the Java API, it is not possible to retrieve records
from join tables using this operation.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the collection or table from which
records are to be retrieved. Must be an existing
collection or table. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned, or END_OF_SET (-9999) to
indicate that the max number of results should be
returned. The number of records returned will never
exceed the server's own limit, defined by the max_get_records_size parameter in
the server configuration. Use offset &
limit to request subsequent pages of
results. |
options |
Object
|
- 'return_record_ids': If 'true' then
return the internal record ID along with each
returned record. Default is 'false'.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
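Example (an illustrative sketch; the collection name is a placeholder):
    // Fetch all records the server will return in one call, including record IDs.
    gpudb.get_records_from_collection(
        "my_collection",
        0,                                  // offset
        -9999,                              // END_OF_SET
        { "return_record_ids": "true" },
        function(err, response) {
            if (err) { console.log(err); return; }
            console.log(response);
        }
    );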
get_records_from_collection_request(request, callback) → {Object}
Retrieves records from a collection. The operation can optionally return the
record IDs which can be used in certain queries such as
GPUdb#delete_records
.
This operation supports paging through the data via the offset
and limit
parameters.
Note that when using the Java API, it is not possible to retrieve records
from join tables using this operation.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
get_records_request(request, callback) → {Object}
Retrieves records from a given table, optionally filtered by an expression
and/or sorted by a column. This operation can be performed on tables, views,
or on homogeneous collections (collections containing tables of all the same
type). Records can be returned encoded as binary, json or geojson.
This operation supports paging through the data via the offset
and limit
parameters. Note that when paging through a table, if
the table (or the underlying table in case of a view) is updated (records
are inserted, deleted or modified) the records retrieved may differ between
calls based on the updates applied.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
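Example (an illustrative sketch of the *_request form; the request-object field names are assumed to mirror the positional parameters of get_records, and the table name is a placeholder):
    var request = {
        table_name: "my_table",              // assumed field names
        offset: 0,
        limit: 100,
        options: { "expression": "x > 0" }
    };
    gpudb.get_records_request(request, function(err, response) {
        if (err) { console.log(err); return; }
        console.log(response);
    });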
grant_permission_proc(name, permission, proc_name, options, callback) → {Object}
Grants a proc-level permission to a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role to which the permission will
be granted. Must be an existing user or role. |
permission |
String
|
Permission to grant to the user or role.
Supported values:
- 'proc_execute': Execute access to
the proc.
|
proc_name |
String
|
Name of the proc to which the permission grants
access. Must be an existing proc, or an empty
string to grant access to all procs. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
grant_permission_proc_request(request, callback) → {Object}
Grants a proc-level permission to a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
grant_permission_system(name, permission, options, callback) → {Object}
Grants a system-level permission to a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role to which the permission will
be granted. Must be an existing user or role. |
permission |
String
|
Permission to grant to the user or role.
Supported values:
- 'system_admin': Full access to all
data and system functions.
- 'system_user_admin': Access to
administer users and roles that do not have
system_admin permission.
- 'system_write': Read and write
access to all tables.
- 'system_read': Read-only access to
all tables.
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
grant_permission_system_request(request, callback) → {Object}
Grants a system-level permission to a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
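Example (an illustrative sketch; the role name is a placeholder):
    // Give a role read-only access to all tables.
    gpudb.grant_permission_system(
        "analyst_role",
        "system_read",
        {},
        function(err, response) {
            if (err) { console.log(err); return; }
            console.log(response);
        }
    );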
grant_permission_table(name, permission, table_name, filter_expression, options, callback) → {Object}
Grants a table-level permission to a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role to which the permission will
be granted. Must be an existing user or role. |
permission |
String
|
Permission to grant to the user or role.
Supported values:
- 'table_admin': Full read/write and
administrative access to the table.
- 'table_insert': Insert access to
the table.
- 'table_update': Update access to
the table.
- 'table_delete': Delete access to
the table.
- 'table_read': Read access to the
table.
|
table_name |
String
|
Name of the table to which the permission grants
access. Must be an existing table, collection,
or view. If a collection, the permission also
applies to tables and views in the collection. |
filter_expression |
String
|
Optional filter expression to apply to
this grant. Only rows that match the
filter will be affected. |
options |
Object
|
Optional parameters.
- 'columns': Apply security to these
columns, comma-separated. The default value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
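Example (an illustrative sketch; the role, table, column, and filter values are placeholders):
    // Grant read access to one table, limited to EU rows and three columns.
    gpudb.grant_permission_table(
        "analyst_role",
        "table_read",
        "my_table",
        "region = 'EU'",                    // only rows matching this filter are readable
        { "columns": "id,region,total" },   // column-level security
        function(err, response) {
            if (err) { console.log(err); return; }
            console.log(response);
        }
    );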
grant_permission_table_request(request, callback) → {Object}
Grants a table-level permission to a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
grant_role(role, member, options, callback) → {Object}
Grants membership in a role to a user or role.
Parameters:
Name |
Type |
Description |
role |
String
|
Name of the role in which membership will be granted.
Must be an existing role. |
member |
String
|
Name of the user or role that will be granted
membership in role . Must be an existing
user or role. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
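Example (an illustrative sketch; the role and user names are placeholders):
    // Make an existing user a member of an existing role.
    gpudb.grant_role("analyst_role", "jdoe", {}, function(err, response) {
        if (err) { console.log(err); return; }
        console.log(response);
    });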
grant_role_request(request, callback) → {Object}
Grants membership in a role to a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
has_proc(proc_name, options, callback) → {Object}
Checks the existence of a proc with the given name.
Parameters:
Name |
Type |
Description |
proc_name |
String
|
Name of the proc to check for existence. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
has_proc_request(request, callback) → {Object}
Checks the existence of a proc with the given name.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
has_table(table_name, options, callback) → {Object}
Checks for the existence of a table with the given name.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to check for existence. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
has_table_request(request, callback) → {Object}
Checks for the existence of a table with the given name.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
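Example (an illustrative sketch; the table name is a placeholder and table_exists is the response field name as recalled, so verify against the response schema):
    gpudb.has_table("my_table", {}, function(err, response) {
        if (err) { console.log(err); return; }
        console.log(response.table_exists);   // true if the table exists
    });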
has_type(type_id, options, callback) → {Object}
Check for the existence of a type.
Parameters:
Name |
Type |
Description |
type_id |
String
|
ID of the type returned in response to a
GPUdb#create_type request. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
has_type_request(request, callback) → {Object}
Check for the existence of a type.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
insert_records(table_name, data, options, callback) → {Object}
Adds multiple records to the specified table. The operation is synchronous,
meaning that a response will not be returned until all the records are fully
inserted and available. The response payload provides the counts of the
number of records actually inserted and/or updated, and can provide the
unique identifier of each added record.
The options
parameter can be used to customize this function's
behavior.
The update_on_existing_pk
option specifies the record collision
policy for inserting into a table with a primary
key, but is ignored if no primary key exists.
The return_record_ids
option indicates that the database should
return the unique identifiers of inserted records.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Table to which the records are to be added. Must
be an existing table. |
data |
Array.<Object>
|
An array of JSON encoded data for the records to be
added. All records must be of the same type as that
of the table. Empty array if
list_encoding is binary . |
options |
Object
|
Optional parameters.
- 'update_on_existing_pk': Specifies the
record collision policy for inserting into a table
with a primary key. If set to
true , any existing table record with
primary key values that match those of a record
being inserted will be replaced by that new record.
If set to false , any existing table
record with primary key values that match those of
a record being inserted will remain unchanged and
the new record discarded. If the specified table
does not have a primary key, then this option is
ignored.
Supported values:
The default value is 'false'.
- 'return_record_ids': If
true, then return the internal record
ID along with each inserted record.
Supported values:
The default value is 'false'.
- 'truncate_strings': If set to
true , any strings which are too long
for their target charN string columns will be
truncated to fit.
Supported values:
The default value is 'false'.
- 'return_individual_errors': If set to
true , success will always be returned,
and any errors found will be included in the info
map. The "bad_record_indices" entry is a
comma-separated list of bad record indices
(0-based), and there will also be an "error_N" entry
for each record with an error, where N is the index
(0-based).
Supported values:
The default value is 'false'.
- 'allow_partial_batch': If set to
true , all correct records will be
inserted and incorrect records will be rejected and
reported. Otherwise, the entire batch will be
rejected if any records are incorrect.
Supported values:
The default value is 'false'.
- 'dry_run': If set to
true , no data will be saved and any
errors will be returned.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
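Example (an illustrative sketch; the table name and record fields are placeholders matching a hypothetical table type, and count_inserted/count_updated are response field names as recalled):
    var records = [
        { x: 1.0, y: 2.0, name: "first" },
        { x: 3.0, y: 4.0, name: "second" }
    ];
    gpudb.insert_records(
        "my_table",
        records,
        { "return_record_ids": "true" },
        function(err, response) {
            if (err) { console.log(err); return; }
            console.log(response.count_inserted, response.count_updated);
        }
    );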
insert_records_from_files(table_name, filepaths, create_table_options, options, callback) → {Object}
Reads from one or more files located on the server and inserts the data into
a new or existing table.
For CSV files, there are two loading schemes: positional and name-based. The
name-based loading scheme is enabled when the file has a header present and
text_has_header
is set to true
. In this scheme,
the source file(s) field names must match the target table's column names
exactly; however, the source file can have more fields than the target table
has columns. If error_handling
is set to
permissive
, the source file can have fewer fields than the
target table has columns. If the name-based loading scheme is being used,
names matching the file header's names may be provided to
columns_to_load
instead of numbers, but ranges are not
supported.
Returns once all files are processed.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table into which the data will be
inserted. If the table does not exist, the table
will be created using either an existing
type_id or the type inferred from
the file. |
filepaths |
Array.<String>
|
Absolute or relative filepath(s) from where
files will be loaded. Relative filepaths are
relative to the defined external_files_directory
parameter in the server configuration. The
filepaths may include wildcards (*). If the
first path ends in .tsv, the text delimiter
will default to a tab character. If the
first path ends in .psv, the text delimiter
will default to a pipe character (|). |
create_table_options |
Object
|
Options used when creating a new
table.
- 'type_id': ID of a
currently registered type. The default
value is ''.
- 'no_error_if_exists': If
true , prevents an error
from occurring if the table already
exists and is of the given type. If a
table with the same ID but a different
type exists, it is still an error.
Supported values:
The default value is 'false'.
- 'collection_name': Name
of a collection which is to contain
the newly created table. If the
collection provided is non-existent,
the collection will be automatically
created. If empty, then the newly
created table will be a top-level
table.
- 'is_replicated': For a
table, affects the distribution scheme
for the table's data. If true and the
given type has no explicit shard key defined,
the table will be replicated. If
false, the table will be sharded according to
the shard key specified in the given
type_id , or randomly sharded, if
no shard key is specified. Note that
a type containing a shard key cannot
be used to create a replicated table.
Supported values:
The default value is 'false'.
- 'foreign_keys':
Semicolon-separated list of foreign keys, of the
format '(source_column_name [, ...])
references
target_table_name(primary_key_column_name
[, ...]) [as foreign_key_name]'.
- 'foreign_shard_key':
Foreign shard key of the format
'source_column references
shard_by_column from
target_table(primary_key_column)'.
- 'partition_type': Partitioning scheme
to use.
Supported values:
- 'partition_keys':
Comma-separated list of partition
keys, which are the columns or column
expressions by which records will be
assigned to partitions defined by
partition_definitions .
- 'partition_definitions':
Comma-separated list of partition
definitions, whose format depends on
the choice of
partition_type . See range partitioning,
interval
partitioning, list partitioning,
or hash partitioning
for example formats.
- 'is_automatic_partition':
If true, a new partition will be
created for values which don't fall
into an existing partition. Currently
only supported for list partitions.
Supported values:
The default value is 'false'.
- 'ttl': For a table, sets
the TTL of the table
specified in
table_name .
- 'chunk_size': Indicates
the number of records per chunk to be
used for this table.
- 'is_result_table': For a
table, indicates whether the table is
an in-memory table. A result table
cannot contain store_only,
text_search, or string columns (charN
columns are acceptable), and it will
not be retained if the server is
restarted.
Supported values:
The default value is 'false'.
- 'strategy_definition':
The tier strategy for
the table and its columns. See tier strategy usage
for format and tier strategy
examples for examples.
|
options |
Object
|
Optional parameters.
- 'batch_size': Specifies the number of
records to process before inserting.
- 'column_formats': For each target
column specified, applies the column-property-bound
format to the source data loaded into that column.
Each column format will contain a mapping of one or
more of its column properties to an appropriate
format for each property. Currently supported
column properties include date, time, & datetime.
The parameter value must be formatted as a JSON
string of maps of column names to maps of column
properties to their corresponding column formats,
e.g., { "order_date" : { "date" : "%Y.%m.%d" },
"order_time" : { "time" : "%H:%M:%S" } }. See
default_column_formats for valid
format syntax.
- 'columns_to_load': For
delimited_text file_type
only. Specifies a comma-delimited list of column
positions or names to load instead of loading all
columns in the file(s); if more than one file is
being loaded, the list of columns will apply to all
files. Column numbers can be specified discretely
or as a range, e.g., a value of '5,7,1..3' will
create a table with the first column in the table
being the fifth column in the file, followed by
seventh column in the file, then the first column
through the fourth column in the file.
- 'default_column_formats': Specifies
the default format to be applied to source data
loaded into columns with the corresponding column
property. This default column-property-bound
format can be overridden by specifying a column
property & format for a given target column in
column_formats . For each specified
annotation, the format will apply to all columns
with that annotation unless a custom
column_formats for that annotation is
specified. The parameter value must be formatted as
a JSON string that is a map of column properties to
their respective column formats, e.g., { "date" :
"%Y.%m.%d", "time" : "%H:%M:%S" }. Column formats
are specified as a string of control characters and
plain text. The supported control characters are
'Y', 'm', 'd', 'H', 'M', and 'S', which follow
the Linux 'strptime()' specification, as well as
's', which specifies seconds and fractional seconds
(though the fractional component will be truncated
past milliseconds). Formats for the 'date'
annotation must include the 'Y', 'm', and 'd'
control characters. Formats for the 'time'
annotation must include the 'H', 'M', and either
'S' or 's' (but not both) control characters.
Formats for the 'datetime' annotation meet both the
'date' and 'time' control character requirements.
For example, '{"datetime" : "%m/%d/%Y %H:%M:%S" }'
would be used to interpret text as "05/04/2000
12:12:11"
- 'dry_run': If set to
true , no data will be inserted but the
file will be read with the applied
error_handling mode and the number of
valid records that would be normally inserted are
returned.
Supported values:
The default value is 'false'.
- 'error_handling': Specifies how errors
should be handled upon insertion.
Supported values:
- 'permissive': Records with missing
columns are populated with nulls if possible;
otherwise, the malformed records are skipped.
- 'ignore_bad_records': Malformed
records are skipped.
- 'abort': Stops current insertion and
aborts entire operation when an error is
encountered.
The default value is 'permissive'.
- 'file_type': File type for the
file(s).
Supported values:
- 'delimited_text': Indicates the
file(s) are in delimited text format, e.g., CSV,
TSV, PSV, etc.
The default value is 'delimited_text'.
- 'loading_mode': Specifies how to
divide data loading among nodes.
Supported values:
- 'head': The head node loads all data.
All files must be available on the head node.
- 'distributed_shared': The worker nodes
coordinate loading a set of files that are
available to all of them. All files must be
available on all nodes. This option is best when
there is a shared file system.
- 'distributed_local': Each worker node
loads all files that are available to it. This
option is best when each worker node has its own
file system.
The default value is 'head'.
- 'text_comment_string': For
delimited_text file_type
only. All lines in the file(s) starting with the
provided string are ignored. The comment string has
no effect unless it appears at the beginning of a
line. The default value is '#'.
- 'text_delimiter': For
delimited_text file_type
only. Specifies the delimiter for values and
columns in the header row (if present). Must be a
single character. The default value is ','.
- 'text_escape_character': For
delimited_text file_type
only. The character used in the file(s) to escape
certain character sequences in text. For example,
the escape character followed by a literal 'n'
escapes to a newline character within the field.
Can be used within quoted string to escape a quote
character. An empty value for this option does not
specify an escape character.
- 'text_has_header': For
delimited_text file_type
only. Indicates whether the delimited text files
have a header row.
Supported values:
The default value is 'true'.
- 'text_header_property_delimiter': For
delimited_text file_type
only. Specifies the delimiter for column properties
in the header row (if present). Cannot be set to
same value as text_delimiter. The default value is
'|'.
- 'text_null_string': For
delimited_text file_type
only. The value in the file(s) to treat as a null
value in the database. The default value is ''.
- 'text_quote_character': For
delimited_text file_type
only. The quote character used in the file(s),
typically encompassing a field value. The character
must appear at beginning and end of field to take
effect. Delimiters within quoted fields are not
treated as delimiters. Within a quoted field,
double quotes (") can be used to escape a single
literal quote character. To not have a quote
character, specify an empty string (""). The
default value is '"'.
- 'truncate_table': If set to
true , truncates the table specified by
table_name prior to loading the
file(s).
Supported values:
The default value is 'false'.
- 'num_tasks_per_rank': Number of tasks
per rank to use for reading files. Defaults to the
external_file_reader_num_tasks server
configuration parameter.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
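Example (an illustrative sketch; the table name, file path, and collection name are placeholders):
    // Load server-side CSV files with a header row, skipping malformed records.
    gpudb.insert_records_from_files(
        "my_table",
        ["data/2020/*.csv"],                  // relative to external_files_directory on the server
        { "collection_name": "staging" },     // used only if the table has to be created
        {
            "file_type": "delimited_text",
            "text_has_header": "true",
            "error_handling": "ignore_bad_records"
        },
        function(err, response) {
            if (err) { console.log(err); return; }
            console.log(response);
        }
    );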
insert_records_from_files_request(request, callback) → {Object}
Reads from one or more files located on the server and inserts the data into
a new or existing table.
For CSV files, there are two loading schemes: positional and name-based. The
name-based loading scheme is enabled when the file has a header present and
text_has_header
is set to true
. In this scheme,
the source file(s) field names must match the target table's column names
exactly; however, the source file can have more fields than the target table
has columns. If error_handling
is set to
permissive
, the source file can have fewer fields than the
target table has columns. If the name-based loading scheme is being used,
names matching the file header's names may be provided to
columns_to_load
instead of numbers, but ranges are not
supported.
Returns once all files are processed.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
insert_records_random(table_name, count, options, callback) → {Object}
Generates a specified number of random records and adds them to the given
table. There is an optional parameter that allows the user to customize the
ranges of the column values. It also allows the user to specify linear
profiles for some or all columns in which case linear values are generated
rather than random ones. Only individual tables are supported for this
operation.
This operation is synchronous, meaning that a response will not be returned
until all random records are fully available.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Table to which random records will be added.
Must be an existing table. Also, must be an
individual table, not a collection of tables,
nor a view of a table. |
count |
Number
|
Number of records to generate. |
options |
Object
|
Optional parameter to pass in specifications for
the randomness of the values. This map is
different from the *options* parameter of most
other endpoints in that it is a map of string to
map of string to doubles, while most others are
maps of string to string. In this map, the top
level keys represent which column's parameters are
being specified, while the internal keys represents
which parameter is being specified. These
parameters take on different meanings depending on
the type of the column. Below follows a more
detailed description of the map:
- 'seed': If provided, the internal
random number generator will be initialized with
the given value. The minimum is 0. This allows
for the same set of random numbers to be generated
across invocations of this endpoint in case the user
wants to repeat the test. Since
options is a map of maps, an
internal map is needed to provide the seed value. For
example, to pass 100 as the seed value through this
parameter, you need something equivalent to:
'options' = {'seed': { 'value': 100 } }
- 'value': Pass the seed value here.
- 'all': This key indicates that the
specifications relayed in the internal map are to
be applied to all columns of the records.
- 'min': For numerical columns, the
minimum of the generated values is set to this
value. Default is -99999. For point, shape, and
track columns, min for numeric 'x' and 'y' columns
needs to be within [-180, 180] and [-90, 90],
respectively. The default minimum possible values
for these columns in such cases are -180.0 and
-90.0. For the 'TIMESTAMP' column, the default
minimum corresponds to Jan 1, 2010.
For string columns, the minimum length of the
randomly generated strings is set to this value
(default is 0). If both minimum and maximum are
provided, minimum must be less than or equal to
max. Value needs to be within [0, 200].
If the min is outside the accepted ranges for
string columns and 'x' and 'y' columns for
point/shape/track, then those parameters will not
be set; however, an error will not be thrown in
such a case. It is the responsibility of the user
to use the
all parameter judiciously.
- 'max': For numerical columns, the
maximum of the generated values is set to this
value. Default is 99999. For point, shape, and
track columns, max for numeric 'x' and 'y' columns
needs to be within [-180, 180] and [-90, 90],
respectively. The default maximum possible values
for these columns in such cases are 180.0 and 90.0.
For string columns, the maximum length of the
randomly generated strings is set to this value
(default is 200). If both minimum and maximum are
provided, *max* must be greater than or equal to
*min*. Value needs to be within [0, 200].
If the *max* is outside the accepted ranges for
string columns and 'x' and 'y' columns for
point/shape/track, then those parameters will not
be set; however, an error will not be thrown in
such a case. It is the responsibility of the user
to use the
all parameter judiciously.
- 'interval': If specified, generate
values for all columns evenly spaced with the given
interval value. If a max value is specified for a
given column the data is randomly generated between
min and max and decimated down to the interval. If
no max is provided the data is linearly generated
starting at the minimum value (instead of
generating random data). For non-decimated
string-type columns the interval value is ignored.
Instead the values are generated following the
pattern: 'attrname_creationIndex#', i.e. the column
name suffixed with an underscore and a running
counter (starting at 0). For string types with
limited size (e.g., char4) the prefix is dropped. No
nulls will be generated for nullable columns.
- 'null_percentage': If specified, then
generate the given percentage of the count as nulls
for all nullable columns. This option will be
ignored for non-nullable columns. The value must
be within the range [0, 1.0]. The default value is
5% (0.05).
- 'cardinality': If specified, limit the
randomly generated values to a fixed set. Not
allowed on a column with interval specified, and is
not applicable to WKT or Track-specific columns.
The value must be greater than 0. This option is
disabled by default.
- 'attr_name': Use the desired column
name in place of
attr_name , and set
the following parameters for the column specified.
This overrides any parameter set by
all .
- 'min': For numerical columns, the
minimum of the generated values is set to this
value. Default is -99999. For point, shape, and
track columns, min for numeric 'x' and 'y' columns
needs to be within [-180, 180] and [-90, 90],
respectively. The default minimum possible values
for these columns in such cases are -180.0 and
-90.0. For the 'TIMESTAMP' column, the default
minimum corresponds to Jan 1, 2010.
For string columns, the minimum length of the
randomly generated strings is set to this value
(default is 0). If both minimum and maximum are
provided, minimum must be less than or equal to
max. Value needs to be within [0, 200].
If the min is outside the accepted ranges for
string columns and 'x' and 'y' columns for
point/shape/track, then those parameters will not
be set; however, an error will not be thrown in
such a case. It is the responsibility of the user
to use the
all parameter judiciously.
- 'max': For numerical columns, the
maximum of the generated values is set to this
value. Default is 99999. For point, shape, and
track columns, max for numeric 'x' and 'y' columns
needs to be within [-180, 180] and [-90, 90],
respectively. The default maximum possible values
for these columns in such cases are 180.0 and 90.0.
For string columns, the maximum length of the
randomly generated strings is set to this value
(default is 200). If both minimum and maximum are
provided, *max* must be greater than or equal to
*min*. Value needs to be within [0, 200].
If the *max* is outside the accepted ranges for
string columns and 'x' and 'y' columns for
point/shape/track, then those parameters will not
be set; however, an error will not be thrown in
such a case. It is the responsibility of the user
to use the
all parameter judiciously.
- 'interval': If specified, generate
values for all columns evenly spaced with the given
interval value. If a max value is specified for a
given column the data is randomly generated between
min and max and decimated down to the interval. If
no max is provided the data is linearly generated
starting at the minimum value (instead of
generating random data). For non-decimated
string-type columns the interval value is ignored.
Instead the values are generated following the
pattern: 'attrname_creationIndex#', i.e. the column
name suffixed with an underscore and a running
counter (starting at 0). For string types with
limited size (e.g., char4) the prefix is dropped. No
nulls will be generated for nullable columns.
- 'null_percentage': If specified and if
this column is nullable, then generate the given
percentage of the count as nulls. This option will
result in an error if the column is not nullable.
The value must be within the range [0, 1.0]. The
default value is 5% (0.05).
- 'cardinality': If specified, limit the
randomly generated values to a fixed set. Not
allowed on a column with interval specified, and is
not applicable to WKT or Track-specific columns.
The value must be greater than 0. This option is
disabled by default.
- 'track_length': This key-map pair is
only valid for track data sets (an error is thrown
otherwise). No nulls would be generated for
nullable columns.
- 'min': Minimum possible length for
generated series; default is 100 records per
series. Must be an integral value within the range
[1, 500]. If both min and max are specified, min
must be less than or equal to max.
- 'max': Maximum possible length for
generated series; default is 500 records per
series. Must be an integral value within the range
[1, 500]. If both min and max are specified, max
must be greater than or equal to min.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
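Example (an illustrative sketch; "my_table" and the column name "x" are placeholders):
    // Generate 1000 reproducible records, bounding one column and disallowing nulls.
    gpudb.insert_records_random(
        "my_table",
        1000,
        {
            "seed": { "value": 100 },          // same data on every invocation
            "all": { "null_percentage": 0 },   // applies to every nullable column
            "x": { "min": -180, "max": 180 }   // per-column override
        },
        function(err, response) {
            if (err) { console.log(err); return; }
            console.log(response);
        }
    );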
insert_records_random_request(request, callback) → {Object}
Generates a specified number of random records and adds them to the given
table. There is an optional parameter that allows the user to customize the
ranges of the column values. It also allows the user to specify linear
profiles for some or all columns in which case linear values are generated
rather than random ones. Only individual tables are supported for this
operation.
This operation is synchronous, meaning that a response will not be returned
until all random records are fully available.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
insert_records_request(request, callback) → {Object}
Adds multiple records to the specified table. The operation is synchronous,
meaning that a response will not be returned until all the records are fully
inserted and available. The response payload provides the counts of the
number of records actually inserted and/or updated, and can provide the
unique identifier of each added record.
The options
parameter can be used to customize this function's
behavior.
The update_on_existing_pk
option specifies the record collision
policy for inserting into a table with a primary
key, but is ignored if no primary key exists.
The return_record_ids
option indicates that the database should
return the unique identifiers of inserted records.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
insert_symbol(symbol_id, symbol_format, symbol_data, options, callback) → {Object}
Adds a symbol or icon (i.e. an image) to represent data points when data is
rendered visually. Users must provide the symbol identifier (string), a
format (currently supported: 'svg' and 'svg_path'), the data for the symbol,
and any additional optional parameters (e.g., color). To have a symbol used
for rendering, create a table with a string column named 'SYMBOLCODE' (along
with 'x' or 'y', for example). Then, when the table is rendered (via
WMS) if the
'dosymbology' parameter is 'true' then the value of the 'SYMBOLCODE' column
is used to pick the symbol displayed for each point.
Parameters:
Name |
Type |
Description |
symbol_id |
String
|
The id of the symbol being added. This is the
same id that should be in the 'SYMBOLCODE' column
for objects using this symbol |
symbol_format |
String
|
Specifies the symbol format. Must be either
'svg' or 'svg_path'.
Supported values:
|
symbol_data |
String
|
The actual symbol data. If
symbol_format is 'svg' then this
should be the raw bytes representing an svg
file. If symbol_format is 'svg_path'
then this should be an svg path string, for
example:
'M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z' |
options |
Object
|
Optional parameters.
- 'color': If
symbol_format
is 'svg' this is ignored. If
symbol_format is 'svg_path' then this
option specifies the color (in RRGGBB hex format)
of the path. For example, to have the path rendered
in red, use 'FF0000'. If 'color' is not provided
then '00FF00' (i.e. green) is used by default.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
insert_symbol_request(request, callback) → {Object}
Adds a symbol or icon (i.e. an image) to represent data points when data is
rendered visually. Users must provide the symbol identifier (string), a
format (currently supported: 'svg' and 'svg_path'), the data for the symbol,
and any additional optional parameters (e.g., color). To have a symbol used
for rendering, create a table with a string column named 'SYMBOLCODE' (along
with 'x' or 'y', for example). Then, when the table is rendered (via
WMS) if the
'dosymbology' parameter is 'true' then the value of the 'SYMBOLCODE' column
is used to pick the symbol displayed for each point.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
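Example (an illustrative sketch; the symbol ID is a placeholder and the path string is the sample shown above):
    // Register an SVG path symbol and have it rendered in red.
    gpudb.insert_symbol(
        "my_arrow",                            // value to store in the SYMBOLCODE column
        "svg_path",
        "M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z",
        { "color": "FF0000" },
        function(err, response) {
            if (err) { console.log(err); return; }
            console.log(response);
        }
    );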
kill_proc(run_id, options, callback) → {Object}
Kills a running proc instance.
Parameters:
Name |
Type |
Description |
run_id |
String
|
The run ID of a running proc instance. If a proc
with a matching run ID is not found or the proc
instance has already completed, no procs will be
killed. If not specified, all running proc instances
will be killed. |
options |
Object
|
Optional parameters.
- 'run_tag': If
run_id is
specified, kill the proc instance that has a
matching run ID and a matching run tag that was
provided to GPUdb#execute_proc . If
run_id is not specified, kill the proc
instance(s) where a matching run tag was provided
to GPUdb#execute_proc . The default
value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
kill_proc_request(request, callback) → {Object}
Kills a running proc instance.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
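Example (an illustrative sketch; the run ID is a placeholder value of the kind returned by GPUdb#execute_proc):
    gpudb.kill_proc("12345", {}, function(err, response) {
        if (err) { console.log(err); return; }
        console.log(response);
    });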
lock_table(table_name, lock_type, options, callback) → {Object}
Manages global access to a table's data. By default a table has a
lock_type
of read_write
, indicating all operations
are permitted. A user may request a read_only
or a
write_only
lock, after which only read or write operations,
respectively, are permitted on the table until the lock is removed. When
lock_type
is no_access
then no operations are
permitted on the table. The lock status can be queried by setting
lock_type
to status
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to be locked. It must be a
currently existing table, collection, or view. |
lock_type |
String
|
The type of lock being applied to the table.
Setting it to status will return the
current lock status of the table without changing
it.
Supported values:
- 'status': Show locked status
- 'no_access': Allow no read/write
operations
- 'read_only': Allow only read
operations
- 'write_only': Allow only write
operations
- 'read_write': Allow all read/write
operations
The default value is 'status'. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
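Example (an illustrative sketch; the table name is a placeholder):
    // Check the current lock status, then restrict the table to reads only.
    gpudb.lock_table("my_table", "status", {}, function(err, response) {
        if (err) { console.log(err); return; }
        console.log(response);
        gpudb.lock_table("my_table", "read_only", {}, function(err2, locked) {
            if (err2) { console.log(err2); return; }
            console.log(locked);
        });
    });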
lock_table_request(request, callback) → {Object}
Manages global access to a table's data. By default a table has a
lock_type
of read_write
, indicating all operations
are permitted. A user may request a read_only
or a
write_only
lock, after which only read or write operations,
respectively, are permitted on the table until the lock is removed. When
lock_type
is no_access
then no operations are
permitted on the table. The lock status can be queried by setting
lock_type
to status
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
match_graph(graph_name, sample_points, solve_method, solution_table, options, callback) → {Object}
Matches a directed route implied by a given set of
latitude/longitude points to an existing underlying road network graph using
a
given solution type.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, the
Graph REST Tutorial,
and/or some
/match/graph examples
before using this endpoint.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the underlying geospatial graph resource
to match to using sample_points . |
sample_points |
Array.<String>
|
Sample points used to match to an
underlying geospatial
graph. Sample points must be specified
using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with: existing
column names, e.g.,
'table.column AS SAMPLE_X'; expressions,
e.g.,
'ST_MAKEPOINT(table.x, table.y) AS
SAMPLE_WKTPOINT'; or constant values, e.g.,
'{1, 2, 10} AS SAMPLE_TRIPID'. |
solve_method |
String
|
The type of solver to use for graph matching.
Supported values:
- 'markov_chain': Matches
sample_points to the graph using
the Hidden Markov Model (HMM)-based method,
which conducts a range-tree closest-edge
search to find the best combinations of
possible road segments
(num_segments ) for each sample
point to create the best route. The route is
secured one point at a time while looking
ahead chain_width number of
points, so the prediction is corrected after
each point. This solution type is the most
accurate but also the most computationally
intensive. Related options:
num_segments and
chain_width .
- 'match_od_pairs': Matches
sample_points to find the most
probable path between origin and destination
pairs with cost constraints.
- 'match_supply_demand': Matches
sample_points to optimize
scheduling multiple supplies (trucks) with
varying sizes to varying demand sites with
varying capacities per depot. Related options:
partial_loading and
max_combinations .
- 'match_batch_solves': Matches
sample_points source and
destination pairs for the shortest path solves
in batch mode.
The default value is 'markov_chain'. |
solution_table |
String
|
The name of the table used to store the
results; this table contains a track of geospatial points
for the matched portion of the graph, a
track ID, and a score value. Also outputs a
details table containing a trip ID (that
matches the track ID), the
latitude/longitude pair, the timestamp the
point was recorded at, and an edge ID
corresponding to the matched road segment.
Has the same naming restrictions as tables. Must not be an
existing table of the same name. |
options |
Object
|
Additional parameters
- 'gps_noise': GPS noise value (in
meters) to remove redundant sample points. Use -1
to disable noise reduction. The default value
accounts for 95% of point variation (+ or -5
meters). The default value is '5.0'.
- 'num_segments': Maximum number of
potentially matching road segments for each sample
point. For the
markov_chain solver,
the default is 3. The default value is '3'.
- 'search_radius': Maximum search radius
used when snapping sample points onto potentially
matching surrounding segments. The default value
corresponds to approximately 100 meters. The
default value is '0.001'.
- 'chain_width': For the
markov_chain solver only. Length of
the sample points lookahead window within the
Markov kernel; the larger the number, the more
accurate the solution. The default value is '9'.
- 'source': Optional WKT starting point
from
sample_points for the solver. The
default behavior for the endpoint is to use time to
determine the starting point. The default value is
'POINT NULL'.
- 'destination': Optional WKT ending
point from
sample_points for the
solver. The default behavior for the endpoint is to
use time to determine the destination point. The
default value is 'POINT NULL'.
- 'partial_loading': For the
match_supply_demand solver only. When
false (non-default), trucks do not off-load at the
demand (store) side if the remainder is less than
the store's need.
Supported values:
- 'true': Partial off-loading at
multiple store (demand) locations
- 'false': No partial off-loading
allowed if supply is less than the store's demand.
The default value is 'true'.
- 'max_combinations': For the
match_supply_demand solver only. This
is the cutoff for the number of generated
combinations for sequencing the demand locations -
can increase this up to 2M. The default value is
'10000'.
- 'left_turn_penalty': This will add an
additional weight over the edges labelled as 'left
turn' if the 'add_turn' option parameter of the
GPUdb#create_graph was invoked at
graph creation. The default value is '0.0'.
- 'right_turn_penalty': This will add an
additional weight over the edges labelled as 'right
turn' if the 'add_turn' option parameter of the
GPUdb#create_graph was invoked at
graph creation. The default value is '0.0'.
- 'intersection_penalty': This will add
an additional weight over the edges labelled as
'intersection' if the 'add_turn' option parameter
of the
GPUdb#create_graph was invoked
at graph creation. The default value is '0.0'.
- 'sharp_turn_penalty': This will add an
additional weight over the edges labelled as 'sharp
turn' or 'u-turn' if the 'add_turn' option
parameter of the
GPUdb#create_graph
was invoked at graph creation. The default value
is '0.0'.
- 'aggregated_output': For the
match_supply_demand solver only. When
it is true (default), each record in the output
table shows a particular truck's scheduled
cumulative round trip path (MULTILINESTRING) and
the corresponding aggregated cost. Otherwise, each
record shows a single scheduled truck route
(LINESTRING) towards a particular demand location
(store id) with its corresponding cost. The
default value is 'true'.
- 'max_trip_cost': For the
match_supply_demand solver only. If
this constraint is greater than zero (default) then
the trucks will skip travelling from one demand
location to another if the cost between them is
greater than this number (distance or time). Zero
(default) value means no check is performed. The
default value is '0.0'.
- 'filter_folding_paths': For the
markov_chain solver only. When true
(non-default), the paths per sequence combination
are checked for folding-over patterns, which can
significantly increase the execution time depending
on the chain width and the number of GPS samples.
Supported values:
- 'true': Filter out the folded paths.
- 'false': Do not filter out the folded
paths
The default value is 'false'.
- 'unit_unloading_cost': For the
match_supply_demand solver only. The
unit cost per load amount to be delivered. If this
value is greater than zero (default) then the
additional cost of this unit load multiplied by the
total dropped load will be added over to the trip
cost to the demand location. The default value is
'0.0'.
- 'max_num_threads': For the
markov_chain solver only. If specified
(greater than zero), the maximum number of threads
will not be greater than the specified value. It
can be lower due to the memory and the number of cores
available. Default value of zero allows the
algorithm to set the maximal number of threads
within these constraints. The default value is
'0'.
- 'truck_service_limit': For the
match_supply_demand solver only. If
specified (greater than zero), any truck's total
service cost (distance or time) will be limited by
the specified value including multiple rounds (if
set). The default value is '0.0'.
- 'enable_truck_reuse': For the
match_supply_demand solver only. If
specified (true), all trucks can be scheduled for
second rounds from their originating depots.
Supported values:
- 'true': Allows reusing trucks for
scheduling again.
- 'false': Trucks are scheduled only
once from their depots.
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
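As an illustration of the supply/demand options above, the sketch below assembles an options object and passes it to match_graph. It is a minimal, hypothetical example: it assumes db is an already-constructed GPUdb instance, that a graph named 'supply_graph' and a table named 'demand_points' exist, that match_graph takes the positional parameters (graph_name, sample_points, solve_method, solution_table, options, callback) as documented earlier, and that callbacks receive (error, response).

// Hypothetical match_supply_demand run; all graph, table, and column names
// below are placeholders.
var match_options = {
    partial_loading: "false",       // no partial off-loading at the stores
    max_combinations: "500000",     // cap on generated demand sequences
    unit_unloading_cost: "1.5",     // extra cost per unit of dropped load
    truck_service_limit: "480.0",   // limit each truck's total service cost
    enable_truck_reuse: "true",     // allow second rounds from the depots
    aggregated_output: "true"       // one MULTILINESTRING per truck round trip
};

db.match_graph(
    "supply_graph",                               // graph to match against
    ["demand_points.wkt AS SAMPLE_WKTPOINT"],     // hypothetical sample-point identifier
    "match_supply_demand",                        // solver discussed above
    "supply_demand_solution",                     // table to hold the schedules
    match_options,
    function(err, response) {                     // assumed (err, response) callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);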
match_graph_request(request, callback) → {Object}
Matches a directed route implied by a given set of
latitude/longitude points to an existing underlying road network graph using
a
given solution type.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, the
Graph REST Tutorial,
and/or some
/match/graph examples
before using this endpoint.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
merge_records(table_name, source_table_names, field_maps, options, callback) → {Object}
Create a new empty result table (specified by
table_name
), and
insert all records from source tables (specified by
source_table_names
) based on the field mapping information
(specified by
field_maps
).
For merge records details and examples, see Merge Records.
For limitations, see Merge Records Limitations and Cautions.
The field map (specified by field_maps
) holds the
user-specified maps of target table column names to source table columns.
The array of field_maps
must match one-to-one with the
source_table_names
, e.g., there's a map present in
field_maps
for each table listed in
source_table_names
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
The new result table name for the records to be
merged. Must NOT be an existing table. |
source_table_names |
Array.<String>
|
The list of source table names to get
the records from. Must be existing
table names. |
field_maps |
Array.<Object>
|
Contains a list of source/target column
mappings, one mapping for each source table
listed in source_table_names
being merged into the target table specified
by table_name . Each mapping
contains the target column names (as keys)
that the data in the mapped source columns or
column expressions (as values) will
be merged into. All of the source columns
being merged into a given target column must
match in type, as that type will determine the
type of the new target column. |
options |
Object
|
Optional parameters.
- 'collection_name': Name of a
collection which is to contain the newly created
merged table specified by
table_name .
If the collection provided is non-existent, the
collection will be automatically created. If empty,
then the newly created merged table will be a
top-level table.
- 'is_replicated': Indicates the distribution scheme for the data
of the merged table specified in
table_name . If true, the table will
be replicated. If false, the table
will be randomly sharded.
Supported values:
The default value is 'false'.
- 'ttl': Sets the TTL of the merged table specified
in
table_name .
- 'persist': If
true , then
the table specified in table_name will
be persisted and will not expire unless a
ttl is specified. If
false , then the table will be an
in-memory table and will expire unless a
ttl is specified otherwise.
Supported values:
The default value is 'true'.
- 'chunk_size': Indicates the number of
records per chunk to be used for the merged table
specified in
table_name .
- 'view_id': view this result table is
part of. The default value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
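To make the field-map structure concrete, here is a minimal sketch that merges two hypothetical source tables into a new table. It assumes db is an already-constructed GPUdb instance, that the source tables and columns named below exist, and that the callback receives (error, response).

// Merge two hypothetical order tables into a new "all_orders" table.
// Each field map pairs a target column name (key) with the source column
// or expression (value) that populates it, one map per source table.
var field_maps = [
    { order_id: "online_orders.id", amount: "online_orders.total" },
    { order_id: "store_orders.id",  amount: "store_orders.subtotal + store_orders.tax" }
];

db.merge_records(
    "all_orders",                         // new result table; must not already exist
    ["online_orders", "store_orders"],    // one entry per field map, in order
    field_maps,
    { is_replicated: "false", persist: "true" },
    function(err, response) {             // assumed (err, response) callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);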
merge_records_request(request, callback) → {Object}
Create a new empty result table (specified by
table_name
), and
insert all records from source tables (specified by
source_table_names
) based on the field mapping information
(specified by
field_maps
).
For merge records details and examples, see Merge Records.
For limitations, see Merge Records Limitations and Cautions.
The field map (specified by field_maps
) holds the
user-specified maps of target table column names to source table columns.
The array of field_maps
must match one-to-one with the
source_table_names
, e.g., there's a map present in
field_maps
for each table listed in
source_table_names
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
modify_graph(graph_name, nodes, edges, weights, restrictions, options, callback) → {Object}
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph resource to modify. |
nodes |
Array.<String>
|
Nodes with which to update existing
nodes in graph specified by
graph_name . Review Nodes for more information. Nodes
must be specified using identifiers; identifiers are
grouped as combinations. Identifiers can be
used with existing column names, e.g.,
'table.column AS NODE_ID', expressions, e.g.,
'ST_MAKEPOINT(column1, column2) AS NODE_WKTPOINT',
or raw values, e.g., '{9, 10, 11} AS NODE_ID'. If
using raw values in an identifier combination, the
number of values specified must match across the
combination. Identifier combination(s) do not have
to match the method used to create the graph, e.g.,
if column names were specified to create the graph,
expressions or raw values could also be used to
modify the graph. |
edges |
Array.<String>
|
Edges with which to update existing
edges in graph specified by
graph_name . Review Edges for more information. Edges
must be specified using identifiers; identifiers are
grouped as combinations. Identifiers can be
used with existing column names, e.g.,
'table.column AS EDGE_ID', expressions, e.g.,
'SUBSTR(column, 1, 6) AS EDGE_NODE1_NAME', or raw
values, e.g., "{'family', 'coworker'} AS
EDGE_LABEL". If using raw values in an identifier
combination, the number of values specified must
match across the combination. Identifier
combination(s) do not have to match the method used
to create the graph, e.g., if column names were
specified to create the graph, expressions or raw
values could also be used to modify the graph. |
weights |
Array.<String>
|
Weights with which to update existing
weights in graph specified by
graph_name . Review Weights for more information.
Weights must be specified using identifiers; identifiers are
grouped as combinations. Identifiers can
be used with existing column names, e.g.,
'table.column AS WEIGHTS_EDGE_ID', expressions,
e.g., 'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED',
or raw values, e.g., '{4, 15} AS
WEIGHTS_VALUESPECIFIED'. If using raw values in
an identifier combination, the number of values
specified must match across the combination.
Identifier combination(s) do not have to match
the method used to create the graph, e.g., if
column names were specified to create the graph,
expressions or raw values could also be used to
modify the graph. |
restrictions |
Array.<String>
|
Restrictions with which to update existing
restrictions in graph specified
by graph_name . Review Restrictions for more
information. Restrictions must be specified
using identifiers; identifiers
are grouped as combinations. Identifiers
can be used with existing column names,
e.g., 'table.column AS
RESTRICTIONS_EDGE_ID', expressions, e.g.,
'column/2 AS RESTRICTIONS_VALUECOMPARED', or
raw values, e.g., '{0, 0, 0, 1} AS
RESTRICTIONS_ONOFFCOMPARED'. If using raw
values in an identifier combination, the
number of values specified must match across
the combination. Identifier combination(s)
do not have to match the method used to
create the graph, e.g., if column names were
specified to create the graph, expressions
or raw values could also be used to modify
the graph. |
options |
Object
|
Optional parameters.
- 'restriction_threshold_value':
Value-based restriction comparison. Any node or
edge with a RESTRICTIONS_VALUECOMPARED value
greater than the
restriction_threshold_value will not
be included in the graph.
- 'export_create_results': If set to
true , returns the graph topology in
the response as arrays.
Supported values:
The default value is 'false'.
- 'enable_graph_draw': If set to
true , adds a 'EDGE_WKTLINE' column
identifier to the specified
graph_table so the graph can be viewed
via WMS; for social and non-geospatial graphs, the
'EDGE_WKTLINE' column identifier will be populated
with spatial coordinates derived from a flattening
layout algorithm so the graph can still be viewed.
Supported values:
The default value is 'false'.
- 'save_persist': If set to
true , the graph will be saved in the
persist directory (see the config
reference for more information). If set to
false , the graph will be removed when
the graph server is shutdown.
Supported values:
The default value is 'false'.
- 'add_table_monitor': Adds a table
monitor to every table used in the creation of the
graph; this table monitor will trigger the graph to
update dynamically upon inserts to the source
table(s). Note that upon database restart, if
save_persist is also set to
true , the graph will be fully
reconstructed and the table monitors will be
reattached. For more details on table monitors, see
GPUdb#create_table_monitor .
Supported values:
The default value is 'false'.
- 'graph_table': If specified, the
created graph is also created as a table with the
given name and following identifier columns:
'EDGE_ID', 'EDGE_NODE1_ID', 'EDGE_NODE2_ID'. If
left blank, no table is created. The default value
is ''.
- 'remove_label_only': When RESTRICTIONS
on labeled entities are requested, if set to true
this will NOT delete the entity but only the label
associated with the entity. Otherwise (default),
both the label AND the entity are deleted.
Supported values:
The default value is 'false'.
- 'add_turns': Adds dummy 'pillowed'
edges around intersection nodes where there are
more than three edges so that additional weight
penalties can be imposed by the solve endpoints
(this increases the total number of edges).
Supported values:
The default value is 'false'.
- 'turn_angle': Value in degrees that
modifies the thresholds for attributing right,
left, sharp turns, and intersections. It is the
vertical deviation angle from the incoming edge to
the intersection node. The larger the value, the
larger the threshold for sharp turns and
intersections; the smaller the value, the larger
the threshold for right and left turns; 0 <
turn_angle < 90. The default value is '60'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
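The following sketch shows one way the identifier combinations described above might be used to refresh the edges and weights of an existing graph. The graph, table, and column names are hypothetical; db is assumed to be an already-constructed GPUdb instance and the callback is assumed to receive (error, response).

// Update a hypothetical road network graph from its source table, recomputing
// edge weights from the geometry and keeping the graph across restarts.
db.modify_graph(
    "road_graph",
    [],                                                       // no node changes
    ["road_edges.id AS EDGE_ID",
     "road_edges.wkt AS EDGE_WKTLINE"],                       // hypothetical edge identifiers
    ["road_edges.id AS WEIGHTS_EDGE_ID",
     "ST_LENGTH(road_edges.wkt) AS WEIGHTS_VALUESPECIFIED"],  // weights from edge length
    [],                                                       // no restriction changes
    { save_persist: "true", add_turns: "true" },
    function(err, response) {                                 // assumed callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);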
modify_graph_request(request, callback) → {Object}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
query_graph(graph_name, queries, restrictions, adjacency_table, rings, options, callback) → {Object}
Employs a topological query on a network graph generated a-priori by
GPUdb#create_graph
and returns a list of adjacent edge(s) or
node(s), also known as an adjacency list, depending on what's been provided
to the endpoint; providing edges will return nodes and providing nodes will
return edges.
To determine the node(s) or edge(s) adjacent to a value from a given column,
provide a list of values to queries
. This field can be
populated with column values from any table as long as the type is supported
by the given identifier. See Query Identifiers for more information.
To return the adjacency list in the response, leave
adjacency_table
empty. To return the adjacency list in a table
and not in the response, provide a value to adjacency_table
and
set export_query_results
to false
. To return the
adjacency list both in a table and the response, provide a value to
adjacency_table
and set export_query_results
to
true
.
IMPORTANT: It's highly recommended that you review the Network
Graphs & Solvers concepts documentation, the Graph
REST Tutorial, and/or some /query/graph examples before using this endpoint.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph resource to query. |
queries |
Array.<String>
|
Nodes or edges to be queried specified using query identifiers. Identifiers
can be used with existing column names, e.g.,
'table.column AS QUERY_NODE_ID', raw values,
e.g., '{0, 2} AS QUERY_NODE_ID', or expressions,
e.g., 'ST_MAKEPOINT(table.x, table.y) AS
QUERY_NODE_WKTPOINT'. Multiple values can be
provided as long as the same identifier is used
for all values. If using raw values in an
identifier combination, the number of values
specified must match across the combination. |
restrictions |
Array.<String>
|
Additional restrictions to apply to the
nodes/edges of an existing graph.
Restrictions must be specified using identifiers; identifiers
are grouped as combinations. Identifiers
can be used with existing column names,
e.g., 'table.column AS
RESTRICTIONS_EDGE_ID', expressions, e.g.,
'column/2 AS RESTRICTIONS_VALUECOMPARED', or
raw values, e.g., '{0, 0, 0, 1} AS
RESTRICTIONS_ONOFFCOMPARED'. If using raw
values in an identifier combination, the
number of values specified must match across
the combination. |
adjacency_table |
String
|
Name of the table to store the resulting
adjacencies. If left blank, the query
results are instead returned in the
response even if
export_query_results is set to
false . If the
'QUERY_TARGET_NODE_LABEL' query identifier is used
in queries , then two
additional columns will be available:
'PATH_ID' and 'RING_ID'. See
Using Labels for more
information. |
rings |
Number
|
Sets the number of rings around the node to query for
adjacency, with '1' being the edges directly attached
to the queried node. Also known as number of hops.
For example, if it is set to '2', the edge(s)
directly attached to the queried node(s) will be
returned; in addition, the edge(s) attached to the
node(s) attached to the initial ring of edge(s)
surrounding the queried node(s) will be returned. If
the value is set to '0', any nodes that meet the
criteria in queries and
restrictions will be returned. This
parameter is only applicable when querying nodes. |
options |
Object
|
Additional parameters
- 'force_undirected': If set to
true , all inbound edges and outbound
edges relative to the node will be returned. If set
to false , only outbound edges relative
to the node will be returned. This parameter is
only applicable if the queried graph
graph_name is directed and when
querying nodes. Consult Directed Graphs for more details.
Supported values:
The default value is 'false'.
- 'limit': When specified, limits the
number of query results. Note that if the
target_nodes_table is provided, the
size of the corresponding table will be limited by
the limit value. The default value is
an empty dict ( {} ).
- 'target_nodes_table': Name of the
table to store the list of the final nodes reached
during the traversal. If this value is left as the
default, the table name will default to the
adjacency_table value plus a '_nodes'
suffix, e.g., '_nodes'. The
default value is ''.
- 'restriction_threshold_value':
Value-based restriction comparison. Any node or
edge with a RESTRICTIONS_VALUECOMPARED value
greater than the
restriction_threshold_value will not
be included in the solution.
- 'export_query_results': Returns query
results in the response. If set to
true , the
adjacency_list_int_array (if the query
was based on IDs),
adjacency_list_string_array (if the
query was based on names), or
adjacency_list_wkt_array (if the query
was based on WKTs) will be populated with the
results. If set to false , none of the
arrays will be populated.
Supported values:
The default value is 'false'.
- 'enable_graph_draw': If set to
true , adds a WKT-type column named
'QUERY_EDGE_WKTLINE' to the given
adjacency_table and inputs WKT values
from the source graph (if available) or
auto-generated WKT values (if there are no WKT
values in the source graph). A subsequent call to
the /wms endpoint can then be made to
display the query results on a map.
Supported values:
The default value is 'false'.
- 'and_labels': If set to
true , the result of the query has
entities that satisfy all of the target labels,
instead of any.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
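As a minimal sketch of a node query, the example below asks for everything within two rings of three raw node IDs, stores the adjacency list in a table, and also returns it in the response. The graph and table names are hypothetical; db is assumed to be an already-constructed GPUdb instance and the callback is assumed to receive (error, response).

// Query the neighborhood (two hops) around hypothetical node IDs 1, 2, and 3.
db.query_graph(
    "road_graph",
    ["{1, 2, 3} AS QUERY_NODE_ID"],     // raw-value query identifier
    [],                                 // no additional restrictions
    "road_graph_adjacency",             // table to hold the adjacency list
    2,                                  // rings (hops) out from each queried node
    { export_query_results: "true" },   // also return the adjacency arrays in the response
    function(err, response) {           // assumed callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);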
query_graph_request(request, callback) → {Object}
Employs a topological query on a network graph generated a-priori by
GPUdb#create_graph
and returns a list of adjacent edge(s) or
node(s), also known as an adjacency list, depending on what's been provided
to the endpoint; providing edges will return nodes and providing nodes will
return edges.
To determine the node(s) or edge(s) adjacent to a value from a given column,
provide a list of values to queries
. This field can be
populated with column values from any table as long as the type is supported
by the given identifier. See Query Identifiers for more information.
To return the adjacency list in the response, leave
adjacency_table
empty. To return the adjacency list in a table
and not in the response, provide a value to adjacency_table
and
set export_query_results
to false
. To return the
adjacency list both in a table and the response, provide a value to
adjacency_table
and set export_query_results
to
true
.
IMPORTANT: It's highly recommended that you review the Network
Graphs & Solvers concepts documentation, the Graph
REST Tutorial, and/or some /query/graph examples before using this endpoint.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
revoke_permission_proc(name, permission, proc_name, options, callback) → {Object}
Revokes a proc-level permission from a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role from which the permission
will be revoked. Must be an existing user or role. |
permission |
String
|
Permission to revoke from the user or role.
Supported values:
- 'proc_execute': Execute access to
the proc.
|
proc_name |
String
|
Name of the proc to which the permission grants
access. Must be an existing proc, or an empty
string if the permission grants access to all
procs. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
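A minimal sketch of revoking proc execute access, using hypothetical role and proc names; db is assumed to be an already-constructed GPUdb instance and the callback is assumed to receive (error, response).

// Revoke execute access on one proc from a role.
db.revoke_permission_proc(
    "analyst_role",          // existing user or role (hypothetical)
    "proc_execute",          // the proc-level permission listed above
    "monthly_rollup_proc",   // existing proc (hypothetical); '' would target all procs
    {},
    function(err, response) {   // assumed callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);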
revoke_permission_proc_request(request, callback) → {Object}
Revokes a proc-level permission from a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
revoke_permission_system(name, permission, options, callback) → {Object}
Revokes a system-level permission from a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role from which the permission
will be revoked. Must be an existing user or role. |
permission |
String
|
Permission to revoke from the user or role.
Supported values:
- 'system_admin': Full access to all
data and system functions.
- 'system_user_admin': Access to
administer users and roles that do not have
system_admin permission.
- 'system_write': Read and write
access to all tables.
- 'system_read': Read-only access to
all tables.
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
revoke_permission_system_request(request, callback) → {Object}
Revokes a system-level permission from a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
revoke_permission_table(name, permission, table_name, options, callback) → {Object}
Revokes a table-level permission from a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role from which the permission
will be revoked. Must be an existing user or role. |
permission |
String
|
Permission to revoke from the user or role.
Supported values:
- 'table_admin': Full read/write and
administrative access to the table.
- 'table_insert': Insert access to
the table.
- 'table_update': Update access to
the table.
- 'table_delete': Delete access to
the table.
- 'table_read': Read access to the
table.
|
table_name |
String
|
Name of the table to which the permission grants
access. Must be an existing table, collection,
or view. |
options |
Object
|
Optional parameters.
- 'columns': Apply security to these
columns, comma-separated. The default value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
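The sketch below revokes column-level read access using the columns option described above. The role, table, and column names are hypothetical; db is assumed to be an already-constructed GPUdb instance.

// Revoke read access to two sensitive columns of a table from a role.
db.revoke_permission_table(
    "analyst_role",
    "table_read",
    "customer_accounts",
    { columns: "ssn,credit_card" },   // comma-separated column list
    function(err, response) {         // assumed (err, response) callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);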
revoke_permission_table_request(request, callback) → {Object}
Revokes a table-level permission from a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
revoke_role(role, member, options, callback) → {Object}
Revokes membership in a role from a user or role.
Parameters:
Name |
Type |
Description |
role |
String
|
Name of the role in which membership will be revoked.
Must be an existing role. |
member |
String
|
Name of the user or role that will be revoked
membership in role . Must be an existing
user or role. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
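A minimal sketch removing a hypothetical user from a hypothetical role; db is assumed to be an already-constructed GPUdb instance.

// Revoke a user's membership in a role.
db.revoke_role(
    "analyst_role",              // role in which membership is revoked
    "jdoe",                      // user (or role) losing membership
    {},
    function(err, response) {    // assumed (err, response) callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);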
revoke_role_request(request, callback) → {Object}
Revokes membership in a role from a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_graph(graph_name, options, callback) → {Object}
Shows information and characteristics of graphs that exist on the graph
server.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph on which to retrieve
information. If left as the default value,
information about all graphs is returned. |
options |
Object
|
Optional parameters.
- 'show_original_request': If set to
true , the request that was originally
used to create the graph is also returned as JSON.
Supported values:
The default value is 'true'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
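For example, the following sketch lists every graph on the graph server along with the request that created it. It assumes db is an already-constructed GPUdb instance and that passing an empty graph name requests all graphs, per the default behavior described above.

// Show all graphs, including each graph's original creation request.
db.show_graph(
    "",                                     // empty name: information on all graphs
    { show_original_request: "true" },
    function(err, response) {               // assumed (err, response) callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);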
show_graph_request(request, callback) → {Object}
Shows information and characteristics of graphs that exist on the graph
server.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_proc(proc_name, options, callback) → {Object}
Shows information about a proc.
Parameters:
Name |
Type |
Description |
proc_name |
String
|
Name of the proc to show information about. If
specified, must be the name of a currently
existing proc. If not specified, information
about all procs will be returned. |
options |
Object
|
Optional parameters.
- 'include_files': If set to
true , the files that make up the proc
will be returned. If set to false , the
files will not be returned.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_proc_request(request, callback) → {Object}
Shows information about a proc.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_proc_status(run_id, options, callback) → {Object}
Shows the statuses of running or completed proc instances. Results are
grouped by run ID (as returned from
GPUdb#execute_proc
) and
data segment ID (each invocation of the proc command on a data segment is
assigned a data segment ID).
Parameters:
Name |
Type |
Description |
run_id |
String
|
The run ID of a specific proc instance for which the
status will be returned. If a proc with a matching
run ID is not found, the response will be empty. If
not specified, the statuses of all executed proc
instances will be returned. |
options |
Object
|
Optional parameters.
- 'clear_complete': If set to
true , if a proc instance has completed
(either successfully or unsuccessfully) then its
status will be cleared and no longer returned in
subsequent calls.
Supported values:
The default value is 'false'.
- 'run_tag': If
run_id is
specified, return the status for a proc instance
that has a matching run ID and a matching run tag
that was provided to
GPUdb#execute_proc . If
run_id is not specified, return
statuses for all proc instances where a matching
run tag was provided to
GPUdb#execute_proc . The default
value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
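A minimal sketch that polls the status of a single proc run and clears completed statuses so they are not reported again. The run ID is a hypothetical value returned by an earlier GPUdb#execute_proc call; db is assumed to be an already-constructed GPUdb instance.

// Check one proc run's status and clear it once it has completed.
db.show_proc_status(
    "12345",                        // hypothetical run ID from execute_proc
    { clear_complete: "true" },
    function(err, response) {       // assumed (err, response) callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);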
show_proc_status_request(request, callback) → {Object}
Shows the statuses of running or completed proc instances. Results are
grouped by run ID (as returned from
GPUdb#execute_proc
) and
data segment ID (each invocation of the proc command on a data segment is
assigned a data segment ID).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_resource_groups(names, options, callback) → {Object}
Requests resource group properties.
Returns detailed information about the requested resource groups.
Parameters:
Name |
Type |
Description |
names |
Array.<String>
|
List of names of groups to be shown. A single entry
with an empty string returns all groups. |
options |
Object
|
Optional parameters.
- 'show_default_values': If
true include values of fields that are
based on the default resource group.
Supported values:
The default value is 'true'.
- 'show_default_group': If
true include the default resource
group in the response.
Supported values:
The default value is 'true'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
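As a sketch, the call below lists all resource groups, including the default group and defaulted field values; db is assumed to be an already-constructed GPUdb instance.

// Show every resource group with fully resolved field values.
db.show_resource_groups(
    [""],                                                        // single empty entry: all groups
    { show_default_values: "true", show_default_group: "true" },
    function(err, response) {                                    // assumed callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);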
show_resource_groups_request(request, callback) → {Object}
Requests resource group properties.
Returns detailed information about the requested resource groups.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_resource_statistics(options, callback) → {Object}
Requests various statistics for storage/memory tiers and resource groups.
Returns statistics on a per-rank basis.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_resource_statistics_request(request, callback) → {Object}
Requests various statistics for storage/memory tiers and resource groups.
Returns statistics on a per-rank basis.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_security(names, options, callback) → {Object}
Shows security information relating to users and/or roles. If the caller is
not a system administrator, only information relating to the caller and
their roles is returned.
Parameters:
Name |
Type |
Description |
names |
Array.<String>
|
A list of names of users and/or roles about which
security information is requested. If none are
provided, information about all users and roles
will be returned. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_security_request(request, callback) → {Object}
Shows security information relating to users and/or roles. If the caller is
not a system administrator, only information relating to the caller and
their roles is returned.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_sql_proc(procedure_name, options, callback) → {Object}
Shows information about SQL procedures, including the full definition of
each requested procedure.
Parameters:
Name |
Type |
Description |
procedure_name |
String
|
Name of the procedure for which to retrieve
the information. If blank, then information
about all procedures is returned. |
options |
Object
|
Optional parameters.
- 'no_error_if_not_exists': If
true , no error will be returned if the
requested procedure does not exist. If
false , an error will be returned if
the requested procedure does not exist.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
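A minimal sketch that fetches the definition of one SQL procedure without raising an error if it does not exist. The procedure name is hypothetical; db is assumed to be an already-constructed GPUdb instance.

// Look up a SQL procedure's definition, tolerating a missing procedure.
db.show_sql_proc(
    "nightly_refresh",                        // hypothetical procedure name; '' lists all
    { no_error_if_not_exists: "true" },
    function(err, response) {                 // assumed (err, response) callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);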
show_sql_proc_request(request, callback) → {Object}
Shows information about SQL procedures, including the full definition of
each requested procedure.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_statistics(table_names, options, callback) → {Object}
Retrieves the collected column statistics for the specified table.
Parameters:
Name |
Type |
Description |
table_names |
Array.<String>
|
Tables whose metadata will be fetched. All
provided tables must exist, or an error is
returned. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_statistics_request(request, callback) → {Object}
Retrieves the collected column statistics for the specified table.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_system_properties(options, callback) → {Object}
Returns server configuration and version related information to the caller.
The admin tool uses it to present server related information to the user.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters.
- 'properties': A list of comma
separated names of properties requested. If not
specified, all properties will be returned.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_system_properties_request(request, callback) → {Object}
Returns server configuration and version related information to the caller.
The admin tool uses it to present server related information to the user.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_system_status(options, callback) → {Object}
Provides server configuration and health related status to the caller. The
admin tool uses it to present server related information to the user.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters, currently unused. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_system_status_request(request, callback) → {Object}
Provides server configuration and health related status to the caller. The
admin tool uses it to present server related information to the user.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_system_timing(options, callback) → {Object}
Returns the last 100 database requests along with the request timing and
internal job id. The admin tool uses it to present request timing
information to the user.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters, currently unused. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_system_timing_request(request, callback) → {Object}
Returns the last 100 database requests along with the request timing and
internal job id. The admin tool uses it to present request timing
information to the user.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_table(table_name, options, callback) → {Object}
Retrieves detailed information about a table, view, or collection, specified
in
table_name
. If the supplied
table_name
is a
collection, the call can return information about either the collection
itself or the tables and views it contains. If
table_name
is
empty, information about all collections and top-level tables and views can
be returned.
If the option get_sizes
is set to
true
, then the number of records
in each table is returned (in sizes
and
full_sizes
), along with the total number of objects across all
requested tables (in total_size
and
total_full_size
).
For a collection, setting the show_children
option to
false
returns only information about the collection itself;
setting show_children
to true
returns a list of
tables and views contained in the collection, along with their corresponding
detail.
To retrieve a list of every table, view, and collection in the database, set
table_name
to '*' and show_children
to
true
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table for which to retrieve the
information. If blank, then information about
all collections and top-level tables and views
is returned. |
options |
Object
|
Optional parameters.
- 'force_synchronous': If
true then the table sizes will wait
for a read lock before returning.
Supported values:
The default value is 'true'.
- 'get_sizes': If
true then
the number of records in each table, along with a
cumulative count, will be returned; blank,
otherwise.
Supported values:
The default value is 'false'.
- 'show_children': If
table_name is a collection, then
true will return information about the
children of the collection, and false
will return information about the collection
itself. If table_name is a table or
view, show_children must be
false . If table_name is
empty, then show_children must be
true .
Supported values:
The default value is 'true'.
- 'no_error_if_not_exists': If
false will return an error if the
provided table_name does not exist. If
true then it will return an empty
result.
Supported values:
The default value is 'false'.
- 'get_column_info': If
true then column info (memory usage,
etc.) will be returned.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
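Tying the options above together, this sketch lists every table, view, and collection in the database along with record counts; db is assumed to be an already-constructed GPUdb instance and the callback is assumed to receive (error, response).

// Enumerate the whole database with sizes, as described above.
db.show_table(
    "*",                                              // wildcard table name
    { show_children: "true", get_sizes: "true" },
    function(err, response) {
        if (err) { console.error(err); } else { console.log(response); }
    }
);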
show_table_metadata(table_names, options, callback) → {Object}
Retrieves the user provided metadata for the specified tables.
Parameters:
Name |
Type |
Description |
table_names |
Array.<String>
|
Tables whose metadata will be fetched. All
provided tables must exist, or an error is
returned. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_table_metadata_request(request, callback) → {Object}
Retrieves the user provided metadata for the specified tables.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_table_request(request, callback) → {Object}
Retrieves detailed information about a table, view, or collection, specified
in
table_name
. If the supplied
table_name
is a
collection, the call can return information about either the collection
itself or the tables and views it contains. If
table_name
is
empty, information about all collections and top-level tables and views can
be returned.
If the option get_sizes
is set to
true
, then the number of records
in each table is returned (in sizes
and
full_sizes
), along with the total number of objects across all
requested tables (in total_size
and
total_full_size
).
For a collection, setting the show_children
option to
false
returns only information about the collection itself;
setting show_children
to true
returns a list of
tables and views contained in the collection, along with their corresponding
detail.
To retrieve a list of every table, view, and collection in the database, set
table_name
to '*' and show_children
to
true
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_tables_by_type(type_id, label, options, callback) → {Object}
Gets names of the tables whose type matches the given criteria. Each table
has a particular type. This type comprises the schema and properties of the
table and sometimes a type label. This function allows a look up of the
existing tables based on full or partial type information. The operation is
synchronous.
Parameters:
Name |
Type |
Description |
type_id |
String
|
Type id returned by a call to
GPUdb#create_type . |
label |
String
|
Optional user supplied label which can be used
instead of the type_id to retrieve all tables with
the given label. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
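A minimal sketch that looks up tables by a user-supplied type label rather than a type ID. The label is hypothetical, and it is assumed that an empty type_id is acceptable when filtering by label only; db is assumed to be an already-constructed GPUdb instance.

// Find all tables whose type carries the given label.
db.show_tables_by_type(
    "",                            // no type ID filter (assumption)
    "order_record",                // hypothetical type label
    {},
    function(err, response) {      // assumed (err, response) callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);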
show_tables_by_type_request(request, callback) → {Object}
Gets names of the tables whose type matches the given criteria. Each table
has a particular type. This type comprises the schema and properties of the
table and sometimes a type label. This function allows a look up of the
existing tables based on full or partial type information. The operation is
synchronous.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_triggers(trigger_ids, options, callback) → {Object}
Retrieves information regarding the specified triggers or all existing
triggers currently active.
Parameters:
Name |
Type |
Description |
trigger_ids |
Array.<String>
|
List of IDs of the triggers whose information
is to be retrieved. An empty list means
information will be retrieved on all active
triggers. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_triggers_request(request, callback) → {Object}
Retrieves information regarding the specified triggers or all existing
triggers currently active.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
show_types(type_id, label, options, callback) → {Object}
Retrieves information for the specified data type ID or type label. For all
data types that match the input criteria, the database returns the type ID,
the type schema, the label (if available), and the type's column properties.
Parameters:
Name |
Type |
Description |
type_id |
String
|
Type Id returned in response to a call to
GPUdb#create_type . |
label |
String
|
Option string that was supplied by user in a call to
GPUdb#create_type . |
options |
Object
|
Optional parameters.
- 'no_join_types': When set to 'true',
no join types will be included.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
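For example, the sketch below retrieves the schema and column properties for a type created earlier, excluding join types. The type ID value is a placeholder; db is assumed to be an already-constructed GPUdb instance.

// Inspect a previously created type.
var type_id = "12345";               // hypothetical value returned by an earlier create_type call
db.show_types(
    type_id,
    "",                              // no label filter
    { no_join_types: "true" },
    function(err, response) {        // assumed (err, response) callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);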
show_types_request(request, callback) → {Object}
Retrieves information for the specified data type ID or type label. For all
data types that match the input criteria, the database returns the type ID,
the type schema, the label (if available), and the type's column properties.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
solve_graph(graph_name, weights_on_edges, restrictions, solver_type, source_nodes, destination_nodes, solution_table, options, callback) → {Object}
Solves an existing graph for a type of problem (e.g., shortest path,
page rank, travelling salesman, etc.) using source nodes, destination nodes,
and
additional, optional weights and restrictions.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, the
Graph REST Tutorial,
and/or some
/solve/graph examples
before using this endpoint.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph resource to solve. |
weights_on_edges |
Array.<String>
|
Additional weights to apply to the edges
of an existing
graph. Weights must be specified using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing
column names, e.g.,
'table.column AS WEIGHTS_EDGE_ID',
expressions, e.g.,
'ST_LENGTH(wkt) AS
WEIGHTS_VALUESPECIFIED', or constant
values, e.g.,
'{4, 15, 2} AS WEIGHTS_VALUESPECIFIED'.
Any provided weights will be added
(in the case of
'WEIGHTS_VALUESPECIFIED') to or
multiplied with
(in the case of
'WEIGHTS_FACTORSPECIFIED') the existing
weight(s). If using
constant values in an identifier
combination, the number of values
specified
must match across the combination. |
restrictions |
Array.<String>
|
Additional restrictions to apply to the
nodes/edges of an
existing graph. Restrictions must be
specified using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing column
names, e.g.,
'table.column AS RESTRICTIONS_EDGE_ID',
expressions, e.g.,
'column/2 AS RESTRICTIONS_VALUECOMPARED', or
constant values, e.g.,
'{0, 0, 0, 1} AS
RESTRICTIONS_ONOFFCOMPARED'. If using
constant values in an
identifier combination, the number of values
specified must match across the
combination. If
remove_previous_restrictions is
set
to true , any
provided restrictions will replace the
existing restrictions. If
remove_previous_restrictions is
set to
false , any provided
restrictions will be added (in the case of
'RESTRICTIONS_VALUECOMPARED') to or
replaced (in the case of
'RESTRICTIONS_ONOFFCOMPARED'). |
solver_type |
String
|
The type of solver to use for the graph.
Supported values:
- 'SHORTEST_PATH': Solves for the
optimal (shortest) path based on weights and
restrictions from one source to destination
nodes. Also known as the Dijkstra solver.
- 'PAGE_RANK': Solves for the
probability of each destination node being
visited based on the links of the graph
topology. Weights are not required to use this
solver.
- 'PROBABILITY_RANK': Solves for the
transitional probability (Hidden Markov) for
each node based on the weights (probability
assigned over given edges).
- 'CENTRALITY': Solves for the
degree of a node to depict how many pairs of
individuals would have to go through the node to
reach one another in the minimum number
of hops. Also known as betweenness.
- 'MULTIPLE_ROUTING': Solves for
finding the minimum cost cumulative path for a
round-trip starting from the given source and
visiting each given destination node once then
returning to the source. Also known as the
travelling salesman problem.
- 'INVERSE_SHORTEST_PATH': Solves
for finding the optimal path cost for each
destination node to route to the source node.
Also known as inverse Dijkstra or the service
man routing problem.
- 'BACKHAUL_ROUTING': Solves for
optimal routes that connect remote asset nodes
to the fixed (backbone) asset nodes.
- 'ALLPATHS': Solves for paths that
would give costs between the max and min solution
radii. Make sure to limit the results via the
'max_solution_targets' option. The min cost should
be >= the shortest_path cost.
The default value is 'SHORTEST_PATH'. |
source_nodes |
Array.<String>
|
It can be one of the nodal identifiers,
e.g., 'NODE_WKTPOINT', for source nodes. For
BACKHAUL_ROUTING , this list
depicts the fixed assets. |
destination_nodes |
Array.<String>
|
It can be one of the nodal identifiers,
e.g., 'NODE_WKTPOINT', for destination
(target) nodes. For
BACKHAUL_ROUTING , this
list depicts the remote assets. |
solution_table |
String
|
Name of the table to store the solution. |
options |
Object
|
Additional parameters
- 'max_solution_radius': For
SHORTEST_PATH and
INVERSE_SHORTEST_PATH solvers only.
Sets the maximum solution cost radius, which
ignores the destination_nodes list and
instead outputs the nodes within the radius sorted
by ascending cost. If set to '0.0', the setting is
ignored. The default value is '0.0'.
- 'min_solution_radius': For
SHORTEST_PATH and
INVERSE_SHORTEST_PATH solvers only.
Applicable only when
max_solution_radius is set. Sets the
minimum solution cost radius, which ignores the
destination_nodes list and instead
outputs the nodes within the radius sorted by
ascending cost. If set to '0.0', the setting is
ignored. The default value is '0.0'.
- 'max_solution_targets': For
SHORTEST_PATH and
INVERSE_SHORTEST_PATH solvers only.
Sets the maximum number of solution targets, which
ignores the destination_nodes list and
instead outputs no more than n number of nodes
sorted by ascending cost where n is equal to the
setting value. If set to 0, the setting is ignored.
The default value is '0'.
- 'export_solve_results': Returns
solution results inside the
result_per_destination_node array in
the response if set to true .
Supported values:
The default value is 'false'.
- 'remove_previous_restrictions': Ignore
the restrictions applied to the graph during the
creation stage and only use the restrictions
specified in this request if set to
true .
Supported values:
The default value is 'false'.
- 'restriction_threshold_value':
Value-based restriction comparison. Any node or
edge with a RESTRICTIONS_VALUECOMPARED value
greater than the
restriction_threshold_value will not
be included in the solution.
- 'uniform_weights': When specified,
assigns the given value to all the edges in the
graph. Note that weights provided in
weights_on_edges will override this
value.
- 'left_turn_penalty': This will add an
additional weight over the edges labelled as 'left
turn' if the 'add_turn' option parameter of the
GPUdb#create_graph was invoked at
graph creation. The default value is '0.0'.
- 'right_turn_penalty': This will add an
additional weight over the edges labelled as 'right
turn' if the 'add_turn' option parameter of the
GPUdb#create_graph was invoked at
graph creation. The default value is '0.0'.
- 'intersection_penalty': This will add
an additional weight over the edges labelled as
'intersection' if the 'add_turn' option parameter
of the
GPUdb#create_graph was invoked
at graph creation. The default value is '0.0'.
- 'sharp_turn_penalty': This will add an
additional weight over the edges labelled as 'sharp
turn' or 'u-turn' if the 'add_turn' option
parameter of the
GPUdb#create_graph
was invoked at graph creation. The default value
is '0.0'.
- 'num_best_paths': For
MULTIPLE_ROUTING solvers only; sets
the number of shortest paths computed from each
node. This is the heuristic criterion. Default
value of zero allows the number to be computed
automatically by the solver. The user may want to
override this parameter to speed-up the solver.
The default value is '0'.
- 'max_num_combinations': For
MULTIPLE_ROUTING solvers only; sets
the cap on the combinatorial sequences generated.
If the default value of two million is overridden
to a lesser value, it can potentially speed up the
solver. The default value is '2000000'.
- 'accurate_snaps': Valid for single
source destination pair solves if points are
described in NODE_WKTPOINT identifier types: When
true (default), it snaps to the nearest node of the
graph; otherwise, it searches for the closest
entity that could be an edge. For the latter case
(false), the solver modifies the resulting cost
with the weights proportional to the ratio of the
snap location within the edge. This may be
overkill when performance is considered, and
the difference is well under 1 percent. In
batch runs, since performance is of utmost
importance, the option is always treated as
'false'.
Supported values:
The default value is 'true'.
- 'output_edge_path': If true then
concatenated edge ids will be added as the EDGE
path column of the solution table for each source
and target pair in shortest path solves.
Supported values:
The default value is 'false'.
- 'output_wkt_path': If true then
concatenated wkt line segments will be added as the
Wktroute column of the solution table for each
source and target pair in shortest path solves.
Supported values:
The default value is 'true'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
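The sketch below runs the default SHORTEST_PATH solver between two WKT points and asks for the WKT route along with a left-turn penalty, combining several of the options above. The graph name, solution table, coordinates, and identifier usage are hypothetical; db is assumed to be an already-constructed GPUdb instance.

// Shortest path between two hypothetical points on a road graph.
db.solve_graph(
    "road_graph",
    [],                                             // no additional weights
    [],                                             // no additional restrictions
    "SHORTEST_PATH",
    ["{'POINT(-73.98 40.75)'} AS NODE_WKTPOINT"],   // hypothetical source identifier
    ["{'POINT(-73.97 40.76)'} AS NODE_WKTPOINT"],   // hypothetical destination identifier
    "shortest_path_solution",
    { output_wkt_path: "true", left_turn_penalty: "15.0" },
    function(err, response) {                       // assumed (err, response) callback shape
        if (err) { console.error(err); } else { console.log(response); }
    }
);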
solve_graph_request(request, callback) → {Object}
Solves an existing graph for a type of problem (e.g., shortest path,
page rank, travelling salesman, etc.) using source nodes, destination nodes,
and
additional, optional weights and restrictions.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, the
Graph REST Tutorial,
and/or some
/solve/graph examples
before using this endpoint.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
submit_request(endpoint, request, callbackopt) → {Object}
Submits an arbitrary request to GPUdb.
If a callback function is provided, the request will be submitted
asynchronously, and the result (either a response or an error) will be passed
to the callback function upon completion.
If a callback function is not provided, the request will be submitted
synchronously and the response returned directly, and an exception will be
thrown if an error occurs.
In either case the function will attempt to cycle through available
GPUdb instances as provided in the constructor if an error occurs with the
server.
Parameters:
Name |
Type |
Attributes |
Description |
endpoint |
String
|
|
The endpoint to which to submit the request. |
request |
Object
|
|
The request object to submit. |
callback |
GPUdbCallback
|
<optional>
|
The callback function, if asynchronous
operation is desired. |
- Source:
Returns:
The response object, if no callback function is provided.
-
Type
-
Object
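For illustration, a minimal sketch of both calling styles described above; the module name, the /show/table request payload, and the Node-style (error, response) callback shape are assumptions, not verbatim API guarantees.
var GPUdb = require("gpudb");                    // assumed module name
var gpudb = new GPUdb("http://hostname:9191");

// Asynchronous: a callback is supplied, so the result or error arrives via the callback.
gpudb.submit_request("/show/table", { table_name: "my_table", options: {} },
    function(err, response) {
        if (err) { console.error("Request failed:", err); return; }
        console.log("Response:", response);
    });

// Synchronous: no callback, so the response is returned directly and
// an exception is thrown if an error occurs.
var response = gpudb.submit_request("/show/table", { table_name: "my_table", options: {} });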
update_records(table_name, expressions, new_values_maps, data, options, callback) → {Object}
Runs multiple predicate-based updates in a single call. With the list of
given expressions, any matching record's column values will be updated as
provided in
new_values_maps
. There is also an optional
'upsert' capability: if a particular predicate doesn't match any
existing record, a new record can be inserted.
Note that this operation can only be run on an original table and not on a
collection or a result view.
This operation can update primary key values. By default only 'pure primary
key' predicates are allowed when updating primary key values. If the primary
key for a table is the column 'attr1', then the operation will only accept
predicates of the form: "attr1 == 'foo'" if the attr1 column is being
updated. For a composite primary key (e.g. columns 'attr1' and 'attr2')
then this operation will only accept predicates of the form: "(attr1 ==
'foo') and (attr2 == 'bar')". Meaning, all primary key columns must appear
in an equality predicate in the expressions. Furthermore each 'pure primary
key' predicate must be unique within a given request. These restrictions
can be removed by utilizing some available options through
options.
The update_on_existing_pk
option specifies the record collision
policy for tables with a primary key, and is ignored on tables with no primary key.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Table to be updated. Must be a currently
existing table and not a collection or view. |
expressions |
Array.<String>
|
A list of the actual predicates, one for each
update; format should follow the guidelines
here . |
new_values_maps |
Array.<Object>
|
List of new values for the matching
records. Each element is a map with
(key, value) pairs where the keys are the
names of the columns whose values are to
be updated; the values are the new
values. The number of elements in the
list should match the length of
expressions . |
data |
Array.<Object>
|
An optional list of new json-avro encoded objects to
insert, one for each update, to be added to the set
if the particular update did not affect any objects. |
options |
Object
|
Optional parameters.
- 'global_expression': An optional
global expression to reduce the search space of the
predicates listed in
expressions . The
default value is ''.
- 'bypass_safety_checks': When set to
true , all predicates are available for
primary key updates. Keep in mind that it is
possible to destroy data in this case, since a
single predicate may match multiple objects
(potentially all records of a table), and then
updating all of those records to have the same
primary key will, due to the primary key uniqueness
constraints, effectively delete all but one of
those updated records.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'update_on_existing_pk': Specifies the
record collision policy for tables with a primary key when updating columns
of the primary key or inserting new
records. If
true , existing records
with primary key values that match those of a
record being updated or inserted will be replaced
by the updated and new records. If
false , existing records with matching
primary key values will remain unchanged, and the
updated or new records with primary key values that
match those of existing records will be discarded.
If the specified table does not have a primary key,
then this option has no effect.
Supported values:
- 'true': Overwrite existing records
when updated and inserted records have the same
primary keys
- 'false': Discard updated and inserted
records when the same primary keys already exist
The default value is 'false'.
- 'update_partition': Force qualifying
records to be deleted and reinserted so their
partition membership will be reevaluated.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'truncate_strings': If set to
true , any strings which are too long
for their charN string fields will be truncated to
fit.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'use_expressions_in_new_values_maps':
When set to
true , all new values in
new_values_maps are considered as
expression values. When set to false ,
all new values in new_values_maps are
considered as constants. NOTE: When
true , string constants will need to be
quoted to avoid being evaluated as expressions.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'record_id': ID of a single record to
be updated (returned in the call to
GPUdb#insert_records or
GPUdb#get_records_from_collection ).
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
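A minimal sketch of the call described above, assuming a hypothetical table 'my_table' with an 'id' primary key and a 'status' column, and assuming the callback follows the (error, response) convention:
// Set status='inactive' on the record whose primary key is 1; if no record
// matches, insert the record supplied in the data argument instead.
gpudb.update_records(
    "my_table",                                // table_name
    ["id == 1"],                               // expressions: one predicate per update
    [{ "status": "inactive" }],                // new_values_maps: constants by default
    [{ "id": 1, "status": "inactive" }],       // data: records to insert if nothing matched
    { "update_on_existing_pk": "true" },       // options
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log("Update response:", response);
    });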
update_records_by_series(table_name, world_table_name, view_name, reserved, options, callback) → {Object}
Updates the view specified by table_name
to include full series
(track) information from the world_table_name
for the series
(tracks) present in the view_name
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the view on which the update operation
will be performed. Must be an existing view. |
world_table_name |
String
|
Name of the table containing the complete
series (track) information. |
view_name |
String
|
Name of the view containing the series (tracks)
which are to be updated. |
reserved |
Array.<String>
|
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
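For illustration only, a sketch of the call above with hypothetical table and view names, passing an empty reserved list and no options:
// Expand the series (tracks) present in 'recent_tracks_view' using the full
// track data stored in 'all_tracks', updating the view 'tracks_view'.
gpudb.update_records_by_series(
    "tracks_view",           // table_name: view to be updated
    "all_tracks",            // world_table_name: table with the complete series data
    "recent_tracks_view",    // view_name: view containing the series to expand
    [],                      // reserved
    {},                      // options
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);
    });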
update_records_by_series_request(request, callback) → {Object}
Updates the view specified by table_name
to include full series
(track) information from the world_table_name
for the series
(tracks) present in the view_name
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
update_records_request(request, callback) → {Object}
Runs multiple predicate-based updates in a single call. With the list of
given expressions, any matching record's column values will be updated as
provided in
new_values_maps
. There is also an optional
'upsert' capability: if a particular predicate doesn't match any
existing record, a new record can be inserted.
Note that this operation can only be run on an original table and not on a
collection or a result view.
This operation can update primary key values. By default only 'pure primary
key' predicates are allowed when updating primary key values. If the primary
key for a table is the column 'attr1', then the operation will only accept
predicates of the form: "attr1 == 'foo'" if the attr1 column is being
updated. For a composite primary key (e.g. columns 'attr1' and 'attr2')
then this operation will only accept predicates of the form: "(attr1 ==
'foo') and (attr2 == 'bar')". Meaning, all primary key columns must appear
in an equality predicate in the expressions. Furthermore each 'pure primary
key' predicate must be unique within a given request. These restrictions
can be removed by utilizing some available options through
options.
The update_on_existing_pk
option specifies the record collision
policy for tables with a primary key, and is ignored on tables with no primary key.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
visualize_image_chart(table_name, x_column_names, y_column_names, min_x, max_x, min_y, max_y, width, height, bg_color, style_options, options, callback) → {Object}
Scatter plot is the only plot type currently supported. A non-numeric column
can be specified as the x or y column, and jitter can be added to it to avoid
excessive overlapping. All color values must be in the format RRGGBB or
AARRGGBB (to specify the alpha value).
The image is contained in the image_data
field.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table containing the data to be
drawn as a chart. |
x_column_names |
Array.<String>
|
Names of the columns containing the data
mapped to the x axis of a chart. |
y_column_names |
Array.<String>
|
Names of the columns containing the data
mapped to the y axis of a chart. |
min_x |
Number
|
Lower bound for the x column values. For non-numeric
x column, each x column item is mapped to an integral
value starting from 0. |
max_x |
Number
|
Upper bound for the x column values. For non-numeric
x column, each x column item is mapped to an integral
value starting from 0. |
min_y |
Number
|
Lower bound for the y column values. For non-numeric
y column, each y column item is mapped to an integral
value starting from 0. |
max_y |
Number
|
Upper bound for the y column values. For non-numeric
y column, each y column item is mapped to an integral
value starting from 0. |
width |
Number
|
Width of the generated image in pixels. |
height |
Number
|
Height of the generated image in pixels. |
bg_color |
String
|
Background color of the generated image. |
style_options |
Object
|
Rendering style options for a chart.
- 'pointcolor': The color of
points in the plot represented as a
hexadecimal number. The default value is
'0000FF'.
- 'pointsize': The size of points
in the plot represented as number of pixels.
The default value is '3'.
- 'pointshape': The shape of
points in the plot.
Supported values:
- 'none'
- 'circle'
- 'square'
- 'diamond'
- 'hollowcircle'
- 'hollowsquare'
- 'hollowdiamond'
The default value is 'square'.
- 'cb_pointcolors': Point color
class break information consisting of three
entries: class-break attribute, class-break
values/ranges, and point color values. This
option overrides the pointcolor option if
both are provided. Class-break ranges are
represented in the form of "min:max".
Class-break values/ranges and point color
values are separated by cb_delimiter, e.g.
{"price", "20:30;30:40;40:50",
"0xFF0000;0x00FF00;0x0000FF"}.
- 'cb_pointsizes': Point size
class break information consisting of three
entries: class-break attribute, class-break
values/ranges, and point size values. This
option overrides the pointsize option if both
are provided. Class-break ranges are
represented in the form of "min:max".
Class-break values/ranges and point size
values are separated by cb_delimiter, e.g.
{"states", "NY;TX;CA", "3;5;7"}.
- 'cb_pointshapes': Point shape
class break information consisting of three
entries: class-break attribute, class-break
values/ranges, and point shape names. This
option overrides the pointshape option if
both are provided. Class-break ranges are
represented in the form of "min:max".
Class-break values/ranges and point shape
names are separated by cb_delimiter, e.g.
{"states", "NY;TX;CA",
"circle;square;diamond"}.
- 'cb_delimiter': A character or
string which separates per-class values in a
class-break style option string. The default
value is ';'.
- 'x_order_by': An expression or
aggregate expression by which non-numeric x
column values are sorted, e.g. "avg(price)
descending".
- 'y_order_by': An expression or
aggregate expression by which non-numeric y
column values are sorted, e.g. "avg(price)",
which defaults to "avg(price) ascending".
- 'scale_type_x': Type of x axis
scale.
Supported values:
- 'none': No scale is applied to
the x axis.
- 'log': A base-10 log scale is
applied to the x axis.
The default value is 'none'.
- 'scale_type_y': Type of y axis
scale.
Supported values:
- 'none': No scale is applied to
the y axis.
- 'log': A base-10 log scale is
applied to the y axis.
The default value is 'none'.
- 'min_max_scaled': If this
option is set to "false", this endpoint
expects the request's min/max values have not yet
been scaled; they will be scaled according to
scale_type_x or scale_type_y for the response. If
this option is set to "true", this endpoint
expects the request's min/max values are already
scaled according to
scale_type_x/scale_type_y, and the response's
min/max values will be equal to the request's
min/max values. The default value is 'false'.
- 'jitter_x': Amplitude of
horizontal jitter applied to non-numeric x
column values. The default value is '0.0'.
- 'jitter_y': Amplitude of
vertical jitter applied to non-numeric y
column values. The default value is '0.0'.
- 'plot_all': If this option is
set to "true", all non-numeric column values
are plotted ignoring min_x, max_x, min_y and
max_y parameters. The default value is
'false'.
|
options |
Object
|
Optional parameters.
- 'image_encoding': Encoding to be
applied to the output image. When using JSON
serialization it is recommended to specify this as
base64 .
Supported values:
- 'base64': Apply base64 encoding to the
output image.
- 'none': Do not apply any additional
encoding to the output image.
The default value is 'none'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
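A sketch of a chart request using the parameters above; the table and column names are hypothetical, and base64 encoding is requested so the image travels cleanly over JSON:
// Render an 800x600 scatter plot of price (y) against size (x).
gpudb.visualize_image_chart(
    "sales",                                   // table_name (hypothetical)
    ["size"],                                  // x_column_names
    ["price"],                                 // y_column_names
    0, 500,                                    // min_x, max_x
    0, 1000,                                   // min_y, max_y
    800, 600,                                  // width, height (pixels)
    "FFFFFF",                                  // bg_color
    { "pointcolor": "0000FF", "pointsize": "3", "pointshape": "circle" },
    { "image_encoding": "base64" },            // options
    function(err, response) {
        if (err) { console.error(err); return; }
        // The rendered chart is expected in response.image_data (base64 text here).
        console.log("Received", response.image_data.length, "characters of image data");
    });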
visualize_image_chart_request(request, callback) → {Object}
Scatter plot is the only plot type currently supported. A non-numeric column
can be specified as the x or y column, and jitter can be added to it to avoid
excessive overlapping. All color values must be in the format RRGGBB or
AARRGGBB (to specify the alpha value).
The image is contained in the image_data
field.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
visualize_isochrone(graph_name, source_node, max_solution_radius, weights_on_edges, restrictions, num_levels, generate_image, levels_table, style_options, solve_options, contour_options, options, callback) → {Object}
Generates an image containing isolines for travel results using an existing
graph. Isolines represent curves of equal cost, with cost typically
referring to the time or distance assigned as the weights of the underlying
graph. See
Network Graphs & Solvers for more information on graphs.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph on which the isochrone is to
be computed. |
source_node |
String
|
Starting vertex on the underlying graph from/to
which the isochrones are created. |
max_solution_radius |
Number
|
Extent of the search radius around
source_node . Set to '-1.0'
for unrestricted search radius. |
weights_on_edges |
Array.<String>
|
Additional weights to apply to the edges
of an existing graph. Weights must be
specified using identifiers;
identifiers are grouped as combinations.
Identifiers can be used with existing
column names, e.g., 'table.column AS
WEIGHTS_EDGE_ID', or expressions, e.g.,
'ST_LENGTH(wkt) AS
WEIGHTS_VALUESPECIFIED'. Any provided
weights will be added (in the case of
'WEIGHTS_VALUESPECIFIED') to or
multiplied with (in the case of
'WEIGHTS_FACTORSPECIFIED') the existing
weight(s). |
restrictions |
Array.<String>
|
Additional restrictions to apply to the
nodes/edges of an existing graph.
Restrictions must be specified using identifiers; identifiers
are grouped as combinations. Identifiers
can be used with existing column names,
e.g., 'table.column AS
RESTRICTIONS_EDGE_ID', or expressions, e.g.,
'column/2 AS RESTRICTIONS_VALUECOMPARED'. If
remove_previous_restrictions is
set to true , any provided
restrictions will replace the existing
restrictions. If
remove_previous_restrictions is
set to false , any provided
restrictions will be added (in the case of
'RESTRICTIONS_VALUECOMPARED') to or replaced
(in the case of
'RESTRICTIONS_ONOFFCOMPARED'). |
num_levels |
Number
|
Number of equally-separated isochrones to
compute. |
generate_image |
Boolean
|
If set to true , generates a
PNG image of the isochrones in the
response.
Supported values:
- true
- false
The default value is true. |
levels_table |
String
|
Name of the table to output the isochrones,
containing levels and their corresponding WKT
geometry. If no value is provided, the table
is not generated. |
style_options |
Object
|
Various style related options of the
isochrone image.
- 'line_size': The width of the
contour lines in pixels. The default value
is '3'.
- 'color': Color of generated
isolines. All color values must be in the
format RRGGBB or AARRGGBB (to specify the
alpha value). If alpha is specified and
flooded contours are enabled, it will be used
as the transparency of the latter. The
default value is 'FF696969'.
- 'bg_color': When
generate_image is set to
true , background color of the
generated image. All color values must be in
the format RRGGBB or AARRGGBB (to specify the
alpha value). The default value is
'00000000'.
- 'text_color': When
add_labels is set to
true , color for the labels. All
color values must be in the format RRGGBB or
AARRGGBB (to specify the alpha value). The
default value is 'FF000000'.
- 'colormap': Colormap for
contours or fill-in regions when applicable.
All color values must be in the format RRGGBB
or AARRGGBB (to specify the alpha value)
Supported values:
- 'jet'
- 'accent'
- 'afmhot'
- 'autumn'
- 'binary'
- 'blues'
- 'bone'
- 'brbg'
- 'brg'
- 'bugn'
- 'bupu'
- 'bwr'
- 'cmrmap'
- 'cool'
- 'coolwarm'
- 'copper'
- 'cubehelix'
- 'dark2'
- 'flag'
- 'gist_earth'
- 'gist_gray'
- 'gist_heat'
- 'gist_ncar'
- 'gist_rainbow'
- 'gist_stern'
- 'gist_yarg'
- 'gnbu'
- 'gnuplot2'
- 'gnuplot'
- 'gray'
- 'greens'
- 'greys'
- 'hot'
- 'hsv'
- 'inferno'
- 'magma'
- 'nipy_spectral'
- 'ocean'
- 'oranges'
- 'orrd'
- 'paired'
- 'pastel1'
- 'pastel2'
- 'pink'
- 'piyg'
- 'plasma'
- 'prgn'
- 'prism'
- 'pubu'
- 'pubugn'
- 'puor'
- 'purd'
- 'purples'
- 'rainbow'
- 'rdbu'
- 'rdgy'
- 'rdpu'
- 'rdylbu'
- 'rdylgn'
- 'reds'
- 'seismic'
- 'set1'
- 'set2'
- 'set3'
- 'spectral'
- 'spring'
- 'summer'
- 'terrain'
- 'viridis'
- 'winter'
- 'wistia'
- 'ylgn'
- 'ylgnbu'
- 'ylorbr'
- 'ylorrd'
The default value is 'jet'.
|
solve_options |
Object
|
Solver specific parameters
- 'remove_previous_restrictions':
Ignore the restrictions applied to the graph
during the creation stage and only use the
restrictions specified in this request if set
to
true .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'restriction_threshold_value':
Value-based restriction comparison. Any node
or edge with a 'RESTRICTIONS_VALUECOMPARED'
value greater than the
restriction_threshold_value will
not be included in the solution.
- 'uniform_weights': When
specified, assigns the given value to all the
edges in the graph. Note that weights
provided in
weights_on_edges
will override this value.
|
contour_options |
Object
|
Contour and image rendering parameters
- 'projection': Spatial
Reference System (i.e. EPSG Code).
Supported values:
- '3857'
- '102100'
- '900913'
- 'EPSG:4326'
- 'PLATE_CARREE'
- 'EPSG:900913'
- 'EPSG:102100'
- 'EPSG:3857'
- 'WEB_MERCATOR'
The default value is 'PLATE_CARREE'.
- 'width': When
generate_image is set to
true , width of the generated
image. The default value is '512'.
- 'height': When
generate_image is set to
true , height of the generated
image. If the default value is used, the
height is set to the value
resulting from multiplying the aspect ratio
by the width . The default
value is '-1'.
- 'search_radius': When
interpolating the graph solution to
generate the isochrone, neighborhood of
influence of sample data (in percent of the
image/grid). The default value is '20'.
- 'grid_size': When
interpolating the graph solution to
generate the isochrone, number of
subdivisions along the x axis when building
the grid (the y is computed using the
aspect ratio of the output image). The
default value is '100'.
- 'color_isolines': Color each
isoline according to the colormap;
otherwise, use the foreground color.
Supported values:
- 'true'
- 'false'
The default value is 'true'.
- 'add_labels': If set to
true , add labels to the
isolines.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'labels_font_size': When
add_labels is set to
true , size of the font (in
pixels) to use for labels. The default
value is '12'.
- 'labels_font_family': When
add_labels is set to
true , font name to be used
when adding labels. The default value is
'arial'.
- 'labels_search_window': When
add_labels is set to
true , a search window is used
to rate the local quality of each isoline.
Smooth, continuous, long stretches with
relatively flat angles are favored. The
provided value is multiplied by the
labels_font_size to calculate
the final window size. The default value
is '4'.
-
'labels_intralevel_separation': When
add_labels is set to
true , this value determines
the distance (in multiples of the
labels_font_size ) to use when
separating labels of different values. The
default value is '4'.
-
'labels_interlevel_separation': When
add_labels is set to
true , this value determines
the distance (in percent of the total
window size) to use when separating labels
of the same value. The default value is
'20'.
- 'labels_max_angle': When
add_labels is set to
true , maximum angle (in
degrees) from the vertical to use when
adding labels. The default value is '60'.
|
options |
Object
|
Additional parameters
- 'solve_table': Name of the table to
host intermediate solve results containing the
position and cost for each vertex in the graph. If
the default value is used, a temporary table is
created and deleted once the solution is
calculated. The default value is ''.
- 'is_replicated': If set to
true , replicate the
solve_table .
Supported values:
- 'true'
- 'false'
The default value is 'true'.
- 'data_min_x': Lower bound for the x
values. If not provided, it will be computed from
the bounds of the input data.
- 'data_max_x': Upper bound for the x
values. If not provided, it will be computed from
the bounds of the input data.
- 'data_min_y': Lower bound for the y
values. If not provided, it will be computed from
the bounds of the input data.
- 'data_max_y': Upper bound for the y
values. If not provided, it will be computed from
the bounds of the input data.
- 'concavity_level': Factor to qualify
the concavity of the isochrone curves. The lower
the value, the more convex (with '0' being
completely convex and '1' being the most concave).
The default value is '0.5'.
- 'use_priority_queue_solvers': Sets the
solver methods explicitly if true.
Supported values:
- 'true': uses the solvers scheduled for
'shortest_path' and 'inverse_shortest_path' based
on solve_direction
- 'false': uses the solvers
'priority_queue' and 'inverse_priority_queue' based
on solve_direction
The default value is 'false'.
- 'solve_direction': Specify whether we
are going to the source node, or starting from it.
Supported values:
- 'from_source': Shortest path to get to
the source (inverse Dijkstra)
- 'to_source': Shortest path to source
(Dijkstra)
The default value is 'from_source'.
|
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object
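A sketch of an isochrone request against a hypothetical road-network graph, keeping most options at their defaults; the graph name, the WKT form of the source node, and the weight column are assumptions for illustration:
// Compute 5 isochrone levels around a source location on 'road_graph',
// returning a PNG image and writing the isolines to 'road_isochrone_levels'.
gpudb.visualize_isochrone(
    "road_graph",                                         // graph_name (hypothetical)
    "POINT(-73.98 40.75)",                                // source_node (assumed WKT point form)
    -1.0,                                                 // max_solution_radius: unrestricted
    ["road_edges.travel_time AS WEIGHTS_VALUESPECIFIED"], // weights_on_edges (hypothetical column)
    [],                                                   // restrictions
    5,                                                    // num_levels
    true,                                                 // generate_image
    "road_isochrone_levels",                              // levels_table
    { "color": "FF696969", "colormap": "jet" },           // style_options
    {},                                                   // solve_options
    { "projection": "PLATE_CARREE", "width": "512" },     // contour_options
    { "concavity_level": "0.5" },                         // options
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log("Isochrone computed:", response);
    });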
visualize_isochrone_request(request, callback) → {Object}
Generates an image containing isolines for travel results using an existing
graph. Isolines represent curves of equal cost, with cost typically
referring to the time or distance assigned as the weights of the underlying
graph. See
Network Graphs & Solvers for more information on graphs.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. If not
specified, request will be synchronous. |
- Source:
Returns:
Response object containing the method_codes of the
operation.
-
Type
-
Object