Class GPUdb¶
-
class gpudb.GPUdb(host='127.0.0.1', port='9191', host_manager_port='9300', encoding='BINARY', connection='HTTP', username='', password='', timeout=None, no_init_db_contact=False, primary_host=None, skip_ssl_cert_verification=False, **kwargs)[source]¶
Bases: object
Construct a new GPUdb client instance.
Parameters
- host (str or list of str) –
- The IP address of the GPUdb server. May be provided as a comma-separated string or a list of strings to support HA. The port may also be included after a colon (in which case the port argument should be left unset). Host may take the form “https://user:password@domain.com:port/path/”. If only a single URL or host is given, and no primary_host is explicitly specified, then the given URL will be used as the primary URL.
- port (str or list of str) –
- The port of the GPUdb server at the given IP address. May be provided as a list in conjunction with host; but if using the same port for all hosts, then a single port value is OK. Also, can be omitted entirely if the host already contains the port. If the host does include a port, then this argument will be ignored.
- host_manager_port (str) –
- The port of the host manager for the GPUdb server at the given IP address. May be provided as a list in conjunction with host; but if using the same port for all hosts, then a single port value is OK.
- encoding (str) –
- Type of Avro encoding to use, “BINARY”, “JSON” or “SNAPPY”.
- connection (str) –
- Connection type; currently only “HTTP” or “HTTPS” is supported. May be provided as a list in conjunction with host; if using the same connection type for all hosts, a single value is OK.
- username (str) –
- An optional http username.
- password (str) –
- The http password for the username.
- timeout (int) –
- HTTP request timeout in seconds. Defaults to global socket timeout.
- no_init_db_contact (bool) –
- If True, the constructor won’t communicate with the database server (e.g. for checking version compatibility). Default is False.
- primary_host (str) –
- Optional parameter; if given, indicates that this host is the primary cluster in the HA ring and must always be attempted to be used first. Switching to another cluster should happen only if this cluster is unavailable. Must be given in the form of ‘http[s]://X.X.X.X:PORT[/httpd-name]’. If this is not given, then all available clusters will be treated with equal probability of use.
- skip_ssl_cert_verification (bool) –
- Applies to HTTPS connections only; ignored for HTTP connections. If True, the verification of the SSL certificate sent by the server will be skipped. Be careful about using this flag; please ensure that you fully understand the repercussions of skipping this verification step. Default is False.
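A minimal connection sketch. All values below are placeholders for illustration; adjust the URL (and credentials, if any) for your own cluster, and note that the actual constructor call requires the gpudb package and a reachable server:

```python
# Hypothetical connection settings -- replace with your cluster's address.
conn_args = {
    "host": "https://user:password@example.com:9191/gpudb",  # URL form carries port and path
    "encoding": "BINARY",
    "timeout": 30,                 # HTTP request timeout, in seconds
    "no_init_db_contact": True,    # don't contact the server during construction
}

# db = gpudb.GPUdb(**conn_args)   # requires the `gpudb` package and a live server
```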
-
class HASynchronicityMode[source]¶
Bases: enum34.Enum
Inner enumeration class to represent the high-availability synchronicity override mode that is applied to each endpoint call. Available enumerations are:
- DEFAULT – No override; defer to the HA process for synchronizing endpoints (which has different logic for different endpoints). This is the default mode.
- SYNCHRONOUS – Synchronize all endpoint calls.
- ASYNCHRONOUS – Do NOT synchronize any endpoint call.
-
GPUdb.set_primary_host(new_primary_host, start_using_new_primary_host=False, delete_old_primary_host=False)[source]¶
Set the primary host for this client. Start using this host per the user’s directions. Also, either delete any existing primary host information, or relegate it to the ranks of a backup host.
Parameters
- new_primary_host (str) –
- A string containing the full URL of the new primary host (of the format ‘http[s]://X.X.X.X:PORT[/httpd-name]’). Must have valid URL format. May be part of the given back-up hosts, or be a completely new one.
- start_using_new_primary_host (bool) –
- Boolean flag indicating whether the new primary host should be used immediately. Please be cautious about setting this flag to True; there may be unintended consequences regarding query chaining. Caveat: if the value given is False, but delete_old_primary_host is True and the old primary host, if any, was being used at the time of this function call, then the client still DOES switch over to the new primary host. Default value is False.
- delete_old_primary_host (bool) –
- Boolean flag indicating that if a primary host was already set, that information should be deleted. If False, then any existing primary host URL will be treated as a regular back-up cluster’s host. Default value is False.
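A usage sketch, assuming a connected client `db` (the URL below is a placeholder):

```python
# Hypothetical new primary URL -- must match 'http[s]://X.X.X.X:PORT[/httpd-name]'.
new_primary = "https://172.123.45.67:8082/gpudb-0"

# db.set_primary_host(new_primary,
#                     start_using_new_primary_host=True,  # switch over immediately
#                     delete_old_primary_host=False)      # demote old primary to backup

assert new_primary.startswith(("http://", "https://"))
```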
-
GPUdb.get_known_types¶
Return all known types; if none, return None.
-
GPUdb.get_known_type(type_id, lookup_type=True)[source]¶
Given a type ID, return any associated known type; if none is found, then optionally try to look it up and save it. Otherwise, return None.
Parameters
- type_id (str) –
- The ID for the type.
- lookup_type (bool) –
- If True, then if the type is not already known, attempt to look it up by invoking show_types(), save it for the future, and return it.
Returns
The associated RecordType if found (or looked up); None otherwise.
-
GPUdb.END_OF_SET = -9999¶
(int) Used to indicate that all of the records (until the end of the set) are desired–generally used for /get/records/* functions.
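A sketch of the typical use, assuming a connected client `db` and a hypothetical table name:

```python
# END_OF_SET (-9999) asks /get/records/* endpoints for all remaining records.
# response = db.get_records(table_name="my_table",     # hypothetical table
#                           offset=0,
#                           limit=gpudb.GPUdb.END_OF_SET)

END_OF_SET = -9999  # mirrors the GPUdb.END_OF_SET constant
assert END_OF_SET == -9999
```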
-
GPUdb.encode_datum(SCHEMA, datum, encoding=None)[source]¶
Returns an Avro binary- or JSON-encoded datum dict using its schema.
Parameters
- SCHEMA (str or avro.Schema) –
- A parsed schema object from avro.schema.parse() or a string containing the schema.
- datum (dict) –
- A dict of key-value pairs containing the data to encode (the entries must match the schema).
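A sketch of encoding a datum against an Avro record schema. The schema and field names are illustrative, and the actual calls require the gpudb package:

```python
# Illustrative Avro record schema with two double fields.
schema_str = """{
  "type": "record", "name": "point",
  "fields": [ {"name": "x", "type": "double"},
              {"name": "y", "type": "double"} ]
}"""
datum = {"x": 1.5, "y": -2.0}   # entries must match the schema's fields

# encoded = db.encode_datum(schema_str, datum)                     # binary by default
# encoded_json = db.encode_datum(schema_str, datum, encoding="json")
assert set(datum) == {"x", "y"}
```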
-
GPUdb.encode_datum_cext(SCHEMA, datum, encoding=None)[source]¶
Returns an Avro binary- or JSON-encoded datum dict using its schema.
Parameters
- SCHEMA (str or avro.Schema) –
- A parsed schema object from avro.schema.parse() or a string containing the schema.
- datum (dict) –
- A dict of key-value pairs containing the data to encode (the entries must match the schema).
-
GPUdb.logger(ranks, log_levels, options={})[source]¶
Convenience function to change log levels of some or all GPUdb ranks.
Parameters
- ranks (list of ints) –
- A list containing the ranks to which to apply the new log levels.
- log_levels (dict of str to str) –
- A map where the keys dictate which logs’ levels to change, and the values dictate what the new log levels will be.
- options (dict of str to str) –
- Optional parameters. Default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- status (str) –
- The status of the endpoint (‘OK’ or ‘ERROR’).
log_levels (map of str to str)
-
GPUdb.set_server_logger_level(ranks, log_levels, options={})[source]¶
Convenience function to change log levels of some or all GPUdb ranks.
Parameters
- ranks (list of ints) –
- A list containing the ranks to which to apply the new log levels.
- log_levels (dict of str to str) –
- A map where the keys dictate which logs’ levels to change, and the values dictate what the new log levels will be.
- options (dict of str to str) –
- Optional parameters. Default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- status (str) –
- The status of the endpoint (‘OK’ or ‘ERROR’).
log_levels (map of str to str)
-
GPUdb.set_client_logger_level(log_level)[source]¶
Set the log level for the client GPUdb class.
Parameters
- log_level (int, long, or str) –
- A valid log level for the logging module.
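A sketch, assuming a connected client `db`; any level understood by the standard `logging` module works:

```python
import logging

# An int constant such as logging.DEBUG, or its string name, is accepted.
# db.set_client_logger_level(logging.DEBUG)
# db.set_client_logger_level("INFO")

assert logging.getLevelName(logging.DEBUG) == "DEBUG"
```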
-
GPUdb.load_gpudb_schemas()[source]¶
Saves all request and response schemas for GPUdb queries in a lookup table (lookup by query name).
-
GPUdb.load_gpudb_func_to_endpoint_map()[source]¶
Saves a mapping of REST endpoint function names to endpoints in a dictionary.
-
GPUdb.admin_add_ranks(hosts=None, config_params=None, options={})[source]¶
Add one or more new ranks to the Kinetica cluster. The new ranks will not contain any data initially, other than replicated tables, and will not be assigned any shards. To rebalance data across the cluster, which includes shifting some shard key assignments to newly added ranks, see admin_rebalance().
For example, if attempting to add three new ranks (two ranks on host 172.123.45.67 and one rank on host 172.123.45.68) to a Kinetica cluster with additional configuration parameters:
- input parameter hosts would be an array including 172.123.45.67 in the first two indices (signifying two ranks being added to host 172.123.45.67) and 172.123.45.68 in the last index (signifying one rank being added to host 172.123.45.68)
- input parameter config_params would be an array of maps, with each map corresponding to the ranks being added in input parameter hosts. The key of each map would be the configuration parameter name and the value would be the parameter’s value, e.g. ‘rank.gpu’:‘1’
This endpoint’s processing includes copying all replicated table data to the new rank(s) and therefore could take a long time. The API call may time out if run directly. It is recommended to run this endpoint asynchronously via create_job().
Parameters
- hosts (list of str) –
- The IP address of each rank being added to the cluster. Insert one entry per rank, even if they are on the same host. The order of the hosts in the array only matters as it relates to the input parameter config_params. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- config_params (list of dicts of str to str) –
- Configuration parameters to apply to the new ranks, e.g., which GPU to use. Configuration parameters that start with ‘rankN.’, where N is the rank number, should omit the N, as the new rank number(s) are not allocated until the ranks are created. Each entry in this array corresponds to the entry at the same array index in the input parameter hosts. This array must either be completely empty or have the same number of elements as the hosts array. An empty array will result in the new ranks being set only with default parameters. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
dry_run – If true, only validation checks will be performed. No ranks are added. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- added_ranks (list of ints) –
- The number assigned to each newly added rank, in the same order as the ranks in the input parameter hosts. Will be empty if the operation fails.
- results (list of str) –
- Text description of the result of each rank being added. Indicates the reason for any errors that occur. Entries are in the same order as the input parameter hosts.
- info (dict of str to str) –
- Additional information.
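The worked example above can be sketched as follows, assuming a connected client `db`; the IPs and `rank.gpu` values are illustrative:

```python
# Two ranks on .67, one rank on .68 -- one entry per rank being added.
hosts = ["172.123.45.67", "172.123.45.67", "172.123.45.68"]
config_params = [ {"rank.gpu": "1"},
                  {"rank.gpu": "2"},
                  {"rank.gpu": "0"} ]

# Validate first, then run for real (ideally asynchronously via create_job()):
# dry = db.admin_add_ranks(hosts, config_params, options={"dry_run": "true"})
# response = db.admin_add_ranks(hosts, config_params)
# print(response["added_ranks"], response["results"])

# config_params must be empty or match hosts one-to-one.
assert len(config_params) in (0, len(hosts))
```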
-
GPUdb.admin_alter_jobs(job_ids=None, action=None, options={})[source]¶
Perform the requested action on a list of one or more job(s). Based on the type of job and the current state of execution, the action may not be successfully executed. The final result of the attempted actions for each specified job is returned in the status array of the response. See Job Manager for more information.
Parameters
- job_ids (list of longs) –
- Jobs to be modified. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- action (str) –
Action to be performed on the jobs specified by job_ids. Allowed values are:
- cancel
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- job_ids (list of longs) –
- Jobs on which the action was performed.
- action (str) –
- Action requested on the jobs.
- status (list of str) –
- Status of the requested action for each job.
- info (dict of str to str) –
- Additional information.
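A sketch of canceling jobs by ID, assuming a connected client `db` (the IDs are placeholders):

```python
# Hypothetical job IDs to cancel.
job_ids = [10001, 10002]

# response = db.admin_alter_jobs(job_ids=job_ids, action="cancel")
# Each entry of response["status"] reports the outcome for the job at
# the same index in response["job_ids"].
assert all(isinstance(j, int) for j in job_ids)
```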
-
GPUdb.admin_offline(offline=None, options={})[source]¶
Take the system offline. When the system is offline, no user operations can be performed with the exception of a system shutdown.
Parameters
- offline (bool) –
Set to true if desired state is offline. Allowed values are:
- true
- false
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- flush_to_disk –
Flush to disk when going offline
Allowed values are:
- true
- false
Returns
A dict with the following entries–
- is_offline (bool) –
- Returns true if the system is offline, or false otherwise.
- info (dict of str to str) –
- Additional information.
-
GPUdb.admin_rebalance(options={})[source]¶
Rebalance the cluster so that all the nodes contain approximately an equal number of records. The rebalance will also cause the shards to be equally distributed (as much as possible) across all the ranks.
This endpoint may take a long time to run, depending on the amount of data in the system. The API call may time out if run directly. It is recommended to run this endpoint asynchronously via create_job().
Parameters
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
rebalance_sharded_data – If true, sharded data will be rebalanced approximately equally across the cluster. Note that for big clusters, this data transfer could be time consuming and result in delayed query responses. Allowed values are:
- true
- false
The default value is ‘true’.
rebalance_unsharded_data – If true, unsharded data (data without primary keys and without shard keys) will be rebalanced approximately equally across the cluster. Note that for big clusters, this data transfer could be time consuming and result in delayed query responses. Allowed values are:
- true
- false
The default value is ‘true’.
table_whitelist – Comma-separated list of unsharded table names to rebalance. Not applicable to sharded tables because they are always balanced in accordance with their primary key or shard key. Cannot be used simultaneously with table_blacklist.
table_blacklist – Comma-separated list of unsharded table names to not rebalance. Not applicable to sharded tables because they are always balanced in accordance with their primary key or shard key. Cannot be used simultaneously with table_whitelist.
aggressiveness – Influences how much data to send per rebalance round. A higher aggressiveness setting will complete the rebalance faster. A lower aggressiveness setting will take longer, but allow for better interleaving between the rebalance and other queries. Allowed values are 1 through 10. The default value is ‘1’.
compact_after_rebalance – Perform compaction of deleted records once the rebalance completes, to reclaim memory and disk space. Default is true, unless repair_incorrectly_sharded_data is set to true. Allowed values are:
- true
- false
The default value is ‘true’.
compact_only – Only perform compaction, do not rebalance. Default is false. Allowed values are:
- true
- false
The default value is ‘false’.
repair_incorrectly_sharded_data – Scans for any data sharded incorrectly and re-routes it to the correct location. This can be done as part of a typical rebalance after expanding the cluster, or in a standalone fashion when it is believed that data is sharded incorrectly somewhere in the cluster. Compaction will not be performed by default when this is enabled. This option may also lengthen rebalance time, and increase the memory used by the rebalance. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- info (dict of str to str) –
- Additional information.
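A sketch of a gentle, unsharded-only rebalance, assuming a connected client `db`; the option keys are documented above, the values are illustrative:

```python
# Rebalance only unsharded data, gently, and skip post-rebalance compaction.
rebalance_options = {
    "rebalance_sharded_data":   "false",
    "rebalance_unsharded_data": "true",
    "aggressiveness":           "3",      # 1 (gentle) .. 10 (fast)
    "compact_after_rebalance":  "false",
}

# response = db.admin_rebalance(options=rebalance_options)  # consider create_job()
assert 1 <= int(rebalance_options["aggressiveness"]) <= 10
```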
-
GPUdb.admin_remove_ranks(ranks=None, options={})[source]¶
Remove one or more ranks from the cluster. All data in the ranks to be removed is rebalanced to other ranks before the node is removed unless the rebalance_sharded_data or rebalance_unsharded_data parameters are set to false in the input parameter options.
Due to the rebalancing, this endpoint may take a long time to run, depending on the amount of data in the system. The API call may time out if run directly. It is recommended to run this endpoint asynchronously via create_job().
Parameters
- ranks (list of ints) –
- Rank numbers of the ranks to be removed from the cluster. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
rebalance_sharded_data – When true, data with primary keys or shard keys will be rebalanced to other ranks prior to rank removal. Note that for big clusters, this data transfer could be time consuming and result in delayed query responses. Allowed values are:
- true
- false
The default value is ‘true’.
rebalance_unsharded_data – When true, unsharded data (data without primary keys and without shard keys) will be rebalanced to other ranks prior to rank removal. Note that for big clusters, this data transfer could be time consuming and result in delayed query responses. Allowed values are:
- true
- false
The default value is ‘true’.
aggressiveness – Influences how much data to send per rebalance round, during the rebalance portion of removing ranks. A higher aggressiveness setting will complete the rebalance faster. A lower aggressiveness setting will take longer, but allow for better interleaving between the rebalance and other queries. Allowed values are 1 through 10. The default value is ‘1’.
Returns
A dict with the following entries–
- removed_ranks (list of ints) –
- Ranks that were removed from the cluster. May be empty in the case of failures.
- results (list of str) –
- Text description of the result of each rank being removed. Indicates the reason for any errors that occur. Entries are in the same order as the input parameter ranks.
- info (dict of str to str) –
- Additional information.
-
GPUdb.admin_show_alerts(num_alerts=None, options={})[source]¶
Requests a list of the most recent alerts. Returns lists of alert data, including timestamp and type.
Parameters
- num_alerts (int) –
- Number of most recent alerts to request. The response will include up to input parameter num_alerts depending on how many alerts there are in the system. A value of 0 returns all stored alerts.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- timestamps (list of str) –
- Timestamp for when the alert occurred, sorted from most recent to least recent. Each array entry corresponds with the entries at the same index in output parameter types and output parameter params.
- types (list of str) –
- Type of system alert, sorted from most recent to least recent. Each array entry corresponds with the entries at the same index in output parameter timestamps and output parameter params.
- params (list of dicts of str to str) –
- Parameters for each alert, sorted from most recent to least recent. Each array entry corresponds with the entries at the same index in output parameter timestamps and output parameter types.
- info (dict of str to str) –
- Additional information.
-
GPUdb.admin_show_cluster_operations(history_index=0, options={})[source]¶
Requests the detailed status of the current operation (by default) or a prior cluster operation specified by input parameter history_index. Returns details on the requested cluster operation.
The response will also indicate how many cluster operations are stored in the history.
Parameters
- history_index (int) –
- Indicates which cluster operation to retrieve. Use 0 for the most recent. The default value is 0.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- history_index (int) –
- The index of this cluster operation in the reverse-chronologically sorted list of operations, where 0 is the most recent operation.
- history_size (int) –
- Number of cluster operations executed to date.
- in_progress (bool) –
Whether this cluster operation is currently in progress or not. Allowed values are:
- true
- false
- start_time (str) –
- The start time of the cluster operation.
- end_time (str) –
- The end time of the cluster operation, if completed.
- endpoint (str) –
- The endpoint that initiated the cluster operation.
- endpoint_schema (str) –
- The schema for the original request.
- overall_status (str) –
Overall success status of the operation. Allowed values are:
- OK – The operation was successful, or, if still in progress, the operation is successful so far.
- ERROR – An error occurred executing the operation.
- user_stopped (bool) –
Whether a user stopped this operation at any point while in progress. Allowed values are:
- true
- false
- percent_complete (int) –
- Percent complete of this entire operation.
- dry_run (bool) –
Whether this operation was a dry run. Allowed values are:
- true
- false
- messages (list of str) –
- Updates and error messages if any.
- add_ranks (bool) –
Whether adding ranks is (or was) part of this operation. Allowed values are:
- true
- false
- add_ranks_status (str) –
If this was a rank-adding operation, the add-specific status of the operation. Allowed values are:
- NOT_STARTED
- IN_PROGRESS
- INTERRUPTED
- COMPLETED_OK
- ERROR
- ranks_being_added (list of ints) –
- The rank numbers of the ranks currently being added, or the rank numbers that were added if the operation is complete.
- rank_hosts (list of str) –
- The host IP addresses of the ranks being added, in the same order as the output parameter ranks_being_added list.
- add_ranks_percent_complete (int) –
- Current percent complete of the add ranks operation.
- remove_ranks (bool) –
Whether removing ranks is (or was) part of this operation. Allowed values are:
- true
- false
- remove_ranks_status (str) –
If this was a rank-removing operation, the removal-specific status of the operation. Allowed values are:
- NOT_STARTED
- IN_PROGRESS
- INTERRUPTED
- COMPLETED_OK
- ERROR
- ranks_being_removed (list of ints) –
- The ranks being removed, or that have been removed if the operation is completed.
- remove_ranks_percent_complete (int) –
- Current percent complete of the remove ranks operation.
- rebalance (bool) –
Whether data and/or shard rebalancing is (or was) part of this operation. Allowed values are:
- true
- false
- rebalance_unsharded_data (bool) –
Whether rebalancing of unsharded data is (or was) part of this operation. Allowed values are:
- true
- false
- rebalance_unsharded_data_status (str) –
If this was an operation that included rebalancing unsharded data, the rebalancing-specific status of the operation. Allowed values are:
- NOT_STARTED
- IN_PROGRESS
- INTERRUPTED
- COMPLETED_OK
- ERROR
- unsharded_rebalance_percent_complete (int) –
- Percentage of unsharded tables that completed rebalancing, out of all unsharded tables to rebalance.
- rebalance_sharded_data (bool) –
Whether rebalancing of sharded data is (or was) part of this operation. Allowed values are:
- true
- false
- shard_array_version (long) –
- Version of the shard array that is (or was) being rebalanced to. Each change to the shard array results in the version number incrementing.
- rebalance_sharded_data_status (str) –
If this was an operation that included rebalancing sharded data, the rebalancing-specific status of the operation. Allowed values are:
- NOT_STARTED
- IN_PROGRESS
- INTERRUPTED
- COMPLETED_OK
- ERROR
- num_shards_changing (int) –
- Number of shards that will change as part of rebalance.
- sharded_rebalance_percent_complete (int) –
- Percentage of shard keys, and their associated data if applicable, that have completed rebalancing.
- info (dict of str to str) –
- Additional information.
-
GPUdb.admin_show_jobs(options={})[source]¶
Get a list of the current jobs in GPUdb.
Parameters
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
show_async_jobs – If true, then the completed async jobs are also included in the response. By default, once the async jobs are completed they are no longer included in the jobs list. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
job_id (list of longs)
status (list of str)
endpoint_name (list of str)
time_received (list of longs)
auth_id (list of str)
source_ip (list of str)
user_data (list of str)
- info (dict of str to str) –
- Additional information.
-
GPUdb.admin_show_shards(options={})[source]¶
Show the mapping of shards to the corresponding rank and TOM. The response message contains a list of 16384 (the total number of shards in the system) rank and TOM numbers corresponding to each shard.
Parameters
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- version (long) –
- Current shard array version number.
- rank (list of ints) –
- Array of ranks indexed by the shard number.
- tom (list of ints) –
- Array of toms to which the corresponding shard belongs.
- info (dict of str to str) –
- Additional information.
-
GPUdb.admin_shutdown(exit_type=None, authorization=None, options={})[source]¶
Exits the database server application.
Parameters
- exit_type (str) –
- Reserved for future use. User can pass an empty string.
- authorization (str) –
- No longer used. User can pass an empty string.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- exit_status (str) –
- ‘OK’ upon (right before) successful exit.
- info (dict of str to str) –
- Additional information.
-
GPUdb.admin_verify_db(options={})[source]¶
Verify that the database is in a consistent state. When inconsistencies or errors are found, the verified_ok flag in the response is set to false and the list of errors found is provided in the error_list.
Parameters
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
verify_nulls – When enabled, verifies that null values are set to zero. Allowed values are:
- true
- false
The default value is ‘false’.
concurrent_safe – When enabled, allows this endpoint to be run safely with other concurrent database operations. Other operations may be slower while this is running. Allowed values are:
- true
- false
The default value is ‘true’.
verify_rank0 – When enabled, compares rank0 table metadata against the workers’ metadata. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- verified_ok (bool) –
- True if no errors were found, false otherwise. The default value is False.
- error_list (list of str) –
- List of errors found while validating the database internal state. The default value is an empty list ( [] ).
- info (dict of str to str) –
- Additional information.
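A sketch of a verification run that is safe alongside concurrent operations, assuming a connected client `db`:

```python
# Documented option keys; values chosen for a concurrent-safe check.
verify_options = {"concurrent_safe": "true", "verify_nulls": "false"}

# response = db.admin_verify_db(options=verify_options)
# if not response["verified_ok"]:
#     for err in response["error_list"]:
#         print(err)
assert verify_options["concurrent_safe"] == "true"
```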
-
GPUdb.aggregate_convex_hull(table_name=None, x_column_name=None, y_column_name=None, options={})[source]¶
Calculates and returns the convex hull for the values in a table specified by input parameter table_name.
Parameters
- table_name (str) –
- Name of table on which the operation will be performed. Must be an existing table. It cannot be a collection.
- x_column_name (str) –
- Name of the column containing the x coordinates of the points for the operation being performed.
- y_column_name (str) –
- Name of the column containing the y coordinates of the points for the operation being performed.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- x_vector (list of floats) –
- Array of x coordinates of the resulting convex set.
- y_vector (list of floats) –
- Array of y coordinates of the resulting convex set.
- count (int) –
- Count of the number of points in the convex set.
is_valid (bool)
- info (dict of str to str) –
- Additional information.
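A sketch, assuming a connected client `db` and a hypothetical table 'points' with double columns 'x' and 'y':

```python
table = "points"   # hypothetical table name
# response = db.aggregate_convex_hull(table_name=table,
#                                     x_column_name="x",
#                                     y_column_name="y")
# hull = list(zip(response["x_vector"], response["y_vector"]))
# print(response["count"], "points on the hull")
assert isinstance(table, str)
```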
-
GPUdb.aggregate_group_by(table_name=None, column_names=None, offset=0, limit=-9999, encoding='binary', options={})[source]¶
Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination. This is somewhat analogous to an SQL-style SELECT...GROUP BY.
For aggregation details and examples, see Aggregation. For limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except unrestricted-length strings may be used for computing applicable aggregates; columns marked as store-only are unable to be used in grouping or aggregation.
The results can be paged via the input parameter offset and input parameter limit parameters. For example, to get 10 groups with the largest counts the inputs would be: limit=10, options={“sort_order”:”descending”, “sort_by”:”value”}.
Input parameter options can be used to customize behavior of this call e.g. filtering or sorting the results.
To group by columns ‘x’ and ‘y’ and compute the number of objects within each group, use: column_names=[‘x’,’y’,’count(*)’].
To also compute the sum of ‘z’ over each group, use: column_names=[‘x’,’y’,’count(*)’,’sum(z)’].
Available aggregation functions are: count(*), sum, min, max, avg, mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min, arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to having.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a result_table name is specified in the input parameter options, the results are stored in a new table with that name–no results are returned in the response. Both the table name and resulting column names must adhere to standard naming conventions; column/aggregation expressions will need to be aliased. If the source table’s shard key is used as the grouping column(s) and all result records are selected (input parameter offset is 0 and input parameter limit is -9999), the result table will be sharded, in all other cases it will be replicated. Sorting will properly function only if the result table is replicated or if there is only one processing node and should not be relied upon in other cases. Not available when any of the values of input parameter column_names is an unrestricted-length string.
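The paging and aggregation examples above can be sketched as follows, assuming a connected client `db`; the table and column names are illustrative:

```python
# Group by x and y; count rows and sum z per group; fetch the 10 largest groups.
column_names = ["x", "y", "count(*)", "sum(z)"]
groupby_options = {"sort_order": "descending", "sort_by": "value"}

# response = db.aggregate_group_by(table_name="my_table",   # hypothetical table
#                                  column_names=column_names,
#                                  offset=0, limit=10,
#                                  options=groupby_options)
assert "count(*)" in column_names
```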
Parameters
- table_name (str) –
- Name of an existing table or view on which the operation will be performed.
- column_names (list of str) –
- List of one or more column names, expressions, and aggregate expressions. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- offset (long) –
- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0.The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the max number of results should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use output parameter has_more_records to see if more records exist in the result to be fetched, and input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records. Allowed values are:
- binary – Indicates that the returned records should be binary encoded.
- json – Indicates that the returned records should be json encoded.
The default value is ‘binary’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the table specified in result_table. If the collection provided is non-existent, the collection will be automatically created. If empty, then the table will be a top-level table.
expression – Filter expression to apply to the table prior to computing the aggregate group by.
having – Filter expression to apply to the aggregated results.
sort_order – String indicating how the returned values should be sorted - ascending or descending. Allowed values are:
- ascending – Indicates that the returned values should be sorted in ascending order.
- descending – Indicates that the returned values should be sorted in descending order.
The default value is ‘ascending’.
sort_by – String determining how the results are sorted. Allowed values are:
- key – Indicates that the returned values should be sorted by key, which corresponds to the grouping columns. If you have multiple grouping columns (and are sorting by key), it will first sort the first grouping column, then the second grouping column, etc.
- value – Indicates that the returned values should be sorted by value, which corresponds to the aggregates. If you have multiple aggregates (and are sorting by value), it will first sort by the first aggregate, then the second aggregate, etc.
The default value is ‘value’.
result_table – The name of the table used to store the results. Has the same naming restrictions as tables. Column names (group-by and aggregate fields) need to be given aliases e.g. [“FChar256 as fchar256”, “sum(FDouble) as sfd”]. If present, no results are returned in the response. This option is not available if one of the grouping attributes is an unrestricted string (i.e., not charN) type.
result_table_persist – If true, then the result table specified in result_table will be persisted and will not expire unless a ttl is specified. If false, then the result table will be an in-memory table and will expire unless a ttl is specified otherwise. Allowed values are:
- true
- false
The default value is ‘false’.
result_table_force_replicated – Force the result table to be replicated (ignores any sharding). Must be used in combination with the result_table option. Allowed values are:
- true
- false
The default value is ‘false’.
result_table_generate_pk – If true then set a primary key for the result table. Must be used in combination with the result_table option. Allowed values are:
- true
- false
The default value is ‘false’.
ttl – Sets the TTL of the table specified in result_table.
chunk_size – Indicates the number of records per chunk to be used for the result table. Must be used in combination with the result_table option.
create_indexes – Comma-separated list of columns on which to create indexes on the result table. Must be used in combination with the result_table option.
view_id – ID of view of which the result table will be a member. The default value is ‘’.
materialize_on_gpu – No longer used. See Resource Management Concepts for information about how resources are managed, Tier Strategy Concepts for how resources are targeted for VRAM, and Tier Strategy Usage for how to specify a table’s priority in VRAM. Allowed values are:
- true
- false
The default value is ‘false’.
pivot – The name of the pivot column.
pivot_values – The value list provided will become the column headers in the output. Should be the values from the pivot_column.
grouping_sets – Customize the grouping attribute sets to compute the aggregates. These sets can include ROLLUP or CUBE operators. The attribute sets should be enclosed in parentheses and can include composite attributes. All attributes specified in the grouping sets must be present in the group-by attributes.
rollup – This option is used to specify the multilevel aggregates.
cube – This option is used to specify the multidimensional aggregates.
Returns
A dict with the following entries–
- response_schema_str (str) –
- Avro schema of output parameter binary_encoded_response or output parameter json_encoded_response.
- binary_encoded_response (str) –
- Avro binary encoded response.
- json_encoded_response (str) –
- Avro JSON encoded response.
- total_number_of_records (long) –
- Total/Filtered number of records.
- has_more_records (bool) –
- Too many records. Returned a partial set.
- info (dict of str to str) –
- Additional information.
- record_type (RecordType or None) –
- A RecordType object which the user can use to decode the binary data via GPUdbRecord.decode_binary_data(). If JSON encoding is used, then None.
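As a sketch of the paging and sorting behavior described above (table and column names here are hypothetical, and `db` is assumed to be an already-connected gpudb.GPUdb instance):

```python
# Group by 'x' and 'y', count rows per group, and sum 'z' per group.
GROUP_COLUMNS = ["x", "y", "count(*)", "sum(z) as sum_z"]
TOP_GROUPS_OPTIONS = {
    "sort_by": "value",          # sort by the aggregate values...
    "sort_order": "descending",  # ...largest groups first
}

def top_groups(db, table_name="my_table", page_size=10, page=0):
    """Fetch one page of the largest groups from `table_name`."""
    return db.aggregate_group_by(
        table_name=table_name,
        column_names=GROUP_COLUMNS,
        offset=page * page_size,   # paging via offset/limit
        limit=page_size,
        encoding="json",
        options=TOP_GROUPS_OPTIONS,
    )
```

With page_size=10 and page=0 this mirrors the limit=10, {"sort_order": "descending", "sort_by": "value"} example in the description above.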
-
GPUdb.
aggregate_group_by_and_decode
(table_name=None, column_names=None, offset=0, limit=-9999, encoding='binary', options={}, record_type=None, force_primitive_return_types=True, get_column_major=True)[source]¶ Calculates unique combinations (groups) of values for the given columns in a given table or view and computes aggregates on each unique combination. This is somewhat analogous to an SQL-style SELECT...GROUP BY.
For aggregation details and examples, see Aggregation. For limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except unrestricted-length strings may be used for computing applicable aggregates; columns marked as store-only are unable to be used in grouping or aggregation.
The results can be paged via the input parameter offset and input parameter limit parameters. For example, to get 10 groups with the largest counts the inputs would be: limit=10, options={“sort_order”:”descending”, “sort_by”:”value”}.
Input parameter options can be used to customize behavior of this call e.g. filtering or sorting the results.
To group by columns ‘x’ and ‘y’ and compute the number of objects within each group, use: column_names=[‘x’,’y’,’count(*)’].
To also compute the sum of ‘z’ over each group, use: column_names=[‘x’,’y’,’count(*)’,’sum(z)’].
Available aggregation functions are: count(*), sum, min, max, avg, mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min, arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to having.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a result_table name is specified in the input parameter options, the results are stored in a new table with that name–no results are returned in the response. Both the table name and resulting column names must adhere to standard naming conventions; column/aggregation expressions will need to be aliased. If the source table’s shard key is used as the grouping column(s) and all result records are selected (input parameter offset is 0 and input parameter limit is -9999), the result table will be sharded, in all other cases it will be replicated. Sorting will properly function only if the result table is replicated or if there is only one processing node and should not be relied upon in other cases. Not available when any of the values of input parameter column_names is an unrestricted-length string.
Parameters
- table_name (str) –
- Name of an existing table or view on which the operation will be performed.
- column_names (list of str) –
- List of one or more column names, expressions, and aggregate expressions. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- offset (long) –
- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the max number of results should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use output parameter has_more_records to see if more records exist in the result to be fetched, and input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records. Allowed values are:
- binary – Indicates that the returned records should be binary encoded.
- json – Indicates that the returned records should be json encoded.
The default value is ‘binary’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the table specified in result_table. If the collection provided is non-existent, the collection will be automatically created. If empty, then the table will be a top-level table.
expression – Filter expression to apply to the table prior to computing the aggregate group by.
having – Filter expression to apply to the aggregated results.
sort_order – String indicating how the returned values should be sorted - ascending or descending. Allowed values are:
- ascending – Indicates that the returned values should be sorted in ascending order.
- descending – Indicates that the returned values should be sorted in descending order.
The default value is ‘ascending’.
sort_by – String determining how the results are sorted. Allowed values are:
- key – Indicates that the returned values should be sorted by key, which corresponds to the grouping columns. If you have multiple grouping columns (and are sorting by key), it will first sort the first grouping column, then the second grouping column, etc.
- value – Indicates that the returned values should be sorted by value, which corresponds to the aggregates. If you have multiple aggregates (and are sorting by value), it will first sort by the first aggregate, then the second aggregate, etc.
The default value is ‘value’.
result_table – The name of the table used to store the results. Has the same naming restrictions as tables. Column names (group-by and aggregate fields) need to be given aliases e.g. [“FChar256 as fchar256”, “sum(FDouble) as sfd”]. If present, no results are returned in the response. This option is not available if one of the grouping attributes is an unrestricted string (i.e., not charN) type.
result_table_persist – If true, then the result table specified in result_table will be persisted and will not expire unless a ttl is specified. If false, then the result table will be an in-memory table and will expire unless a ttl is specified otherwise. Allowed values are:
- true
- false
The default value is ‘false’.
result_table_force_replicated – Force the result table to be replicated (ignores any sharding). Must be used in combination with the result_table option. Allowed values are:
- true
- false
The default value is ‘false’.
result_table_generate_pk – If true then set a primary key for the result table. Must be used in combination with the result_table option. Allowed values are:
- true
- false
The default value is ‘false’.
ttl – Sets the TTL of the table specified in result_table.
chunk_size – Indicates the number of records per chunk to be used for the result table. Must be used in combination with the result_table option.
create_indexes – Comma-separated list of columns on which to create indexes on the result table. Must be used in combination with the result_table option.
view_id – ID of view of which the result table will be a member. The default value is ‘’.
materialize_on_gpu – No longer used. See Resource Management Concepts for information about how resources are managed, Tier Strategy Concepts for how resources are targeted for VRAM, and Tier Strategy Usage for how to specify a table’s priority in VRAM. Allowed values are:
- true
- false
The default value is ‘false’.
pivot – The name of the pivot column.
pivot_values – The value list provided will become the column headers in the output. Should be the values from the pivot_column.
grouping_sets – Customize the grouping attribute sets to compute the aggregates. These sets can include ROLLUP or CUBE operators. The attribute sets should be enclosed in parentheses and can include composite attributes. All attributes specified in the grouping sets must be present in the group-by attributes.
rollup – This option is used to specify the multilevel aggregates.
cube – This option is used to specify the multidimensional aggregates.
- record_type (RecordType or None) –
- The record type expected in the results, or None to determine the appropriate type automatically. If known, providing this may improve performance in binary mode. Not used in JSON mode. The default value is None.
- force_primitive_return_types (bool) –
- If True, then OrderedDict objects will be returned, where string sub-type columns will have their values converted back to strings; for example, the Python datetime structs used for datetime type columns would have their values returned as strings. If False, then Record objects will be returned, which for string sub-types will return native or custom structs; no conversion to string takes place. String conversions, when returning OrderedDicts, incur a speed penalty, and it is strongly recommended to use the Record object option instead. If True, but none of the returned columns require a conversion, then the original Record objects will be returned. Default value is True.
- get_column_major (bool) –
- Indicates if the decoded records will be transposed to be column-major or returned as is (row-major). Default value is True.
Returns
A dict with the following entries–
- response_schema_str (str) –
- Avro schema of output parameter binary_encoded_response or output parameter json_encoded_response.
- total_number_of_records (long) –
- Total/Filtered number of records.
- has_more_records (bool) –
- Too many records. Returned a partial set.
- info (dict of str to str) –
- Additional information.
- records (list of Record) –
- A list of Record objects which contain the decoded records.
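A minimal sketch of the decoding variant (names hypothetical; `db` is a connected gpudb.GPUdb instance), showing the extra decoding-related arguments:

```python
def grouped_records(db, table_name="my_table"):
    """Group and decode in one call; the returned dict's 'records' entry
    holds already-decoded data rather than an Avro-encoded payload."""
    return db.aggregate_group_by_and_decode(
        table_name=table_name,
        column_names=["x", "count(*) as n"],
        offset=0,
        limit=100,
        encoding="binary",
        options={},
        record_type=None,                   # let the client infer the type
        force_primitive_return_types=True,  # OrderedDicts with string values
        get_column_major=True,              # transpose to column-major
    )
```

Setting force_primitive_return_types=False avoids the string-conversion speed penalty noted above when Record objects are acceptable.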
-
GPUdb.
aggregate_histogram
(table_name=None, column_name=None, start=None, end=None, interval=None, options={})[source]¶ Performs a histogram calculation given a table, a column, and an interval function. The input parameter interval is used to produce bins of that size and the result, computed over the records falling within each bin, is returned. For each bin, the start value is inclusive, but the end value is exclusive–except for the very last bin for which the end value is also inclusive. The value returned for each bin is the number of records in it, except when a column name is provided as a value_column. In this latter case the sum of the values corresponding to the value_column is used as the result instead. The total number of bins requested cannot exceed 10,000.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service a request that specifies a value_column option.
Parameters
- table_name (str) –
- Name of the table on which the operation will be performed. Must be an existing table or collection.
- column_name (str) –
- Name of a column or an expression of one or more column names over which the histogram will be calculated.
- start (float) –
- Lower end value of the histogram interval, inclusive.
- end (float) –
- Upper end value of the histogram interval, inclusive.
- interval (float) –
- The size of each bin within the start and end parameters.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- value_column – The name of the column to use when calculating the bin values (values are summed). The column must be a numerical type (int, double, long, float).
Returns
A dict with the following entries–
- counts (list of floats) –
- The array of calculated values that represents the histogram data points.
- start (float) –
- Value of input parameter start.
- end (float) –
- Value of input parameter end.
- info (dict of str to str) –
- Additional information.
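A sketch of calling this endpoint, with a small client-side guard for the 10,000-bin cap (names hypothetical; `db` is a connected gpudb.GPUdb instance; the bin-count formula assumes the integer upper bound of (end-start)/interval, as in the by-range endpoint below):

```python
import math

MAX_BINS = 10000  # server-side cap on the number of requested bins

def request_histogram(db, table_name, column_name, start, end, interval):
    """Request a histogram; raises before the call if too many bins."""
    nbins = math.ceil((end - start) / interval)
    if nbins > MAX_BINS:
        raise ValueError("requested %d bins; the limit is %d" % (nbins, MAX_BINS))
    return db.aggregate_histogram(
        table_name=table_name, column_name=column_name,
        start=start, end=end, interval=interval,
    )

def bin_index(value, start, interval):
    """Bin a value falls into: [start + i*interval, start + (i+1)*interval)."""
    return int((value - start) // interval)
```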
-
GPUdb.
aggregate_k_means
(table_name=None, column_names=None, k=None, tolerance=None, options={})[source]¶ This endpoint runs the k-means algorithm - a heuristic algorithm that attempts to do k-means clustering. An ideal k-means clustering algorithm selects k points such that the sum of the mean squared distances of each member of the set to the nearest of the k points is minimized. The k-means algorithm however does not necessarily produce such an ideal cluster. It begins with a randomly selected set of k points and then refines the location of the points iteratively and settles to a local minimum. Various parameters and options are provided to control the heuristic search.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.
Parameters
- table_name (str) –
- Name of the table on which the operation will be performed. Must be an existing table or collection.
- column_names (list of str) –
- List of column names on which the operation would be performed. If n columns are provided then each of the k result points will have n dimensions corresponding to the n columns. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- k (int) –
- The number of mean points to be determined by the algorithm.
- tolerance (float) –
- Stop iterating when the distances between successive points are less than the given tolerance.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- whiten – When set to 1, each of the columns is first normalized by its stdv - default is not to whiten.
- max_iters – Number of times to try to hit the tolerance limit before giving up - default is 10.
- num_tries – Number of times to run the k-means algorithm with different randomly selected starting points - helps avoid local minima. Default is 1.
Returns
A dict with the following entries–
- means (list of lists of floats) –
- The k-mean values found.
- counts (list of longs) –
- The number of elements in the cluster closest the corresponding k-means values.
- rms_dists (list of floats) –
- The root mean squared distance of the elements in the cluster for each of the k-means values.
- count (long) –
- The total count of all the clusters - will be the size of the input table.
- rms_dist (float) –
- The sum of all the rms_dists - the value the k-means algorithm is attempting to minimize.
- tolerance (float) –
- The distance between the last two iterations of the algorithm before it quit.
- num_iters (int) –
- The number of iterations the algorithm executed before it quit.
- info (dict of str to str) –
- Additional information.
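The documented options can be assembled as follows (a sketch with hypothetical values; `db` is a connected gpudb.GPUdb instance and the instance must be a CUDA build, per the note above):

```python
# Documented option keys, shown with hypothetical values.
K_MEANS_OPTIONS = {
    "whiten": "1",       # normalize each column by its stdv first
    "max_iters": "50",   # iterations per try before giving up
    "num_tries": "5",    # restarts with new random starting points
}

def run_k_means(db, table_name, column_names, k=3, tolerance=1e-4):
    """Sketch of a k-means call; returns the response dict with
    'means', 'counts', 'rms_dists', etc."""
    return db.aggregate_k_means(
        table_name=table_name, column_names=column_names,
        k=k, tolerance=tolerance, options=K_MEANS_OPTIONS,
    )
```

Raising num_tries trades run time for a better chance of escaping a poor local minimum.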
-
GPUdb.
aggregate_min_max
(table_name=None, column_name=None, options={})[source]¶ Calculates and returns the minimum and maximum values of a particular column in a table.
Parameters
- table_name (str) –
- Name of the table on which the operation will be performed. Must be an existing table.
- column_name (str) –
- Name of a column or an expression of one or more column names on which the min-max will be calculated.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- min (float) –
- Minimum value of the input parameter column_name.
- max (float) –
- Maximum value of the input parameter column_name.
- info (dict of str to str) –
- Additional information.
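A minimal wrapper might look like this (a sketch; `db` is assumed to be a connected gpudb.GPUdb instance):

```python
def column_range(db, table_name, column_name):
    """Return (min, max) for a column, unpacked from the response dict."""
    resp = db.aggregate_min_max(table_name=table_name,
                                column_name=column_name)
    return resp["min"], resp["max"]
```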
-
GPUdb.
aggregate_min_max_geometry
(table_name=None, column_name=None, options={})[source]¶ Calculates and returns the minimum and maximum x- and y-coordinates of a particular geospatial geometry column in a table.
Parameters
- table_name (str) –
- Name of the table on which the operation will be performed. Must be an existing table.
- column_name (str) –
- Name of a geospatial geometry column on which the min-max will be calculated.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- min_x (float) –
- Minimum x-coordinate value of the input parameter column_name.
- max_x (float) –
- Maximum x-coordinate value of the input parameter column_name.
- min_y (float) –
- Minimum y-coordinate value of the input parameter column_name.
- max_y (float) –
- Maximum y-coordinate value of the input parameter column_name.
- info (dict of str to str) –
- Additional information.
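The four returned coordinates form a bounding box, which can be unpacked as in this sketch (`db` is assumed to be a connected gpudb.GPUdb instance):

```python
def bounding_box(db, table_name, geometry_column):
    """Return ((min_x, min_y), (max_x, max_y)) for a geometry column."""
    r = db.aggregate_min_max_geometry(
        table_name=table_name, column_name=geometry_column)
    return (r["min_x"], r["min_y"]), (r["max_x"], r["max_y"])
```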
-
GPUdb.
aggregate_statistics
(table_name=None, column_name=None, stats=None, options={})[source]¶ Calculates the requested statistics of the given column(s) in a given table.
The available statistics are count (number of total objects), mean, stdv (standard deviation), variance, skew, kurtosis, sum, min, max, weighted_average, cardinality (unique count), estimated_cardinality, percentile and percentile_rank.
Estimated cardinality is calculated by using the hyperloglog approximation technique.
Percentiles and percentile ranks are approximate and are calculated using the t-digest algorithm. They must include the desired percentile/percentile_rank. To compute multiple percentiles each value must be specified separately (i.e. ‘percentile(75.0),percentile(99.0),percentile_rank(1234.56),percentile_rank(-5)’).
A second, comma-separated value can be added to the percentile statistic to calculate percentile resolution, e.g., a 50th percentile with 200 resolution would be ‘percentile(50,200)’.
The weighted average statistic requires a weight_column_name to be specified in input parameter options. The weighted average is then defined as the sum of the products of input parameter column_name times the weight_column_name values divided by the sum of the weight_column_name values.
Additional columns can be used in the calculation of statistics via the additional_column_names option. Values in these columns will be included in the overall aggregate calculation–individual aggregates will not be calculated per additional column. For instance, requesting the count & mean of input parameter column_name x and additional_column_names y & z, where x holds the numbers 1-10, y holds 11-20, and z holds 21-30, would return the total number of x, y, & z values (30), and the single average value across all x, y, & z values (15.5).
The response includes a list of key/value pairs of each statistic requested and its corresponding value.
Parameters
- table_name (str) –
- Name of the table on which the statistics operation will be performed.
- column_name (str) –
- Name of the primary column for which the statistics are to be calculated.
- stats (str) –
Comma separated list of the statistics to calculate, e.g. “sum,mean”. Allowed values are:
- count – Number of objects (independent of the given column(s)).
- mean – Arithmetic mean (average), equivalent to sum/count.
- stdv – Sample standard deviation (denominator is count-1).
- variance – Unbiased sample variance (denominator is count-1).
- skew – Skewness (third standardized moment).
- kurtosis – Kurtosis (fourth standardized moment).
- sum – Sum of all values in the column(s).
- min – Minimum value of the column(s).
- max – Maximum value of the column(s).
- weighted_average – Weighted arithmetic mean (using the option weight_column_name as the weighting column).
- cardinality – Number of unique values in the column(s).
- estimated_cardinality – Estimate (via hyperloglog technique) of the number of unique values in the column(s).
- percentile – Estimate (via t-digest) of the given percentile of the column(s) (percentile(50.0) will be an approximation of the median). Add a second, comma-separated value to calculate percentile resolution, e.g., ‘percentile(75,150)’
- percentile_rank – Estimate (via t-digest) of the percentile rank of the given value in the column(s) (if the given value is the median of the column(s), percentile_rank(<median>) will return approximately 50.0).
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- additional_column_names – A list of comma separated column names over which statistics can be accumulated along with the primary column. All columns listed and input parameter column_name must be of the same type. Must not include the column specified in input parameter column_name and no column can be listed twice.
- weight_column_name – Name of column used as weighting attribute for the weighted average statistic.
Returns
A dict with the following entries–
- stats (dict of str to floats) –
- (statistic name, double value) pairs of the requested statistics, including the total count by default.
- info (dict of str to str) –
- Additional information.
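A sketch of building the stats string and options described above, including parameterized percentiles (column names are hypothetical; `db` is a connected gpudb.GPUdb instance):

```python
# Requested statistics; percentile(50,200) is a median estimate with
# resolution 200, percentile_rank(1234.56) is the rank of that value.
STATS = "count,mean,stdv,percentile(50,200),percentile_rank(1234.56)"
STATS_OPTIONS = {
    "additional_column_names": "y,z",  # pooled into one aggregate with x
    "weight_column_name": "w",         # only used by weighted_average
}

def column_statistics(db, table_name="my_table", column_name="x"):
    """Returns the response dict; its 'stats' entry maps statistic
    names to double values."""
    return db.aggregate_statistics(
        table_name=table_name, column_name=column_name,
        stats=STATS, options=STATS_OPTIONS,
    )
```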
-
GPUdb.
aggregate_statistics_by_range
(table_name=None, select_expression='', column_name=None, value_column_name=None, stats=None, start=None, end=None, interval=None, options={})[source]¶ Divides the given set into bins and calculates statistics of the values of a value-column in each bin. The bins are based on the values of a given binning-column. The statistics that may be requested are mean, stdv (standard deviation), variance, skew, kurtosis, sum, min, max, first, last and weighted average. In addition to the requested statistics the count of total samples in each bin is returned. This counts vector is just the histogram of the column used to divide the set members into bins. The weighted average statistic requires a weight_column to be specified in input parameter options. The weighted average is then defined as the sum of the products of the value column times the weight column divided by the sum of the weight column.
There are two methods for binning the set members. In the first, which can be used for numeric valued binning-columns, a min, max and interval are specified. The number of bins, nbins, is the integer upper bound of (max-min)/interval. Values that fall in the range [min+n*interval,min+(n+1)*interval) are placed in the nth bin where n ranges from 0..nbins-2. The final bin is [min+(nbins-1)*interval,max]. In the second method, input parameter options bin_values specifies a list of binning column values. Binning-columns whose value matches the nth member of the bin_values list are placed in the nth bin. When a list is provided the binning-column must be of type string or int.
NOTE: The Kinetica instance being accessed must be running a CUDA (GPU-based) build to service this request.
Parameters
- table_name (str) –
- Name of the table on which the ranged-statistics operation will be performed.
- select_expression (str) –
- For a non-empty expression statistics are calculated for those records for which the expression is true. The default value is ‘’.
- column_name (str) –
- Name of the binning-column used to divide the set samples into bins.
- value_column_name (str) –
- Name of the value-column for which statistics are to be computed.
- stats (str) –
- A string of comma separated list of the statistics to calculate, e.g. ‘sum,mean’. Available statistics: mean, stdv (standard deviation), variance, skew, kurtosis, sum.
- start (float) –
- The lower bound of the binning-column.
- end (float) –
- The upper bound of the binning-column.
- interval (float) –
- The interval of a bin. Set members fall into bin i if the binning-column falls in the range [start+interval*i, start+interval*(i+1)).
- options (dict of str to str) –
Map of optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- additional_column_names – A list of comma separated value-column names over which statistics can be accumulated along with the primary value_column.
- bin_values – A list of comma separated binning-column values. Values that match the nth bin_values value are placed in the nth bin.
- weight_column_name – Name of the column used as weighting column for the weighted_average statistic.
- order_column_name – Name of the column used for candlestick charting techniques.
Returns
A dict with the following entries–
- stats (dict of str to lists of floats) –
- A map with a key for each statistic in the stats input parameter having a value that is a vector of the corresponding value-column bin statistics. In addition, the key count has a value that is a histogram of the binning-column.
- info (dict of str to str) –
- Additional information.
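The numeric binning rule above can be sketched as a small helper (illustrative only; the server performs this computation itself):

```python
import math

def numeric_bins(start, end, interval):
    """Reproduce the documented binning: nbins is the integer upper
    bound of (end - start) / interval; bin n covers
    [start + n*interval, start + (n+1)*interval), with the final bin
    closed at `end`."""
    nbins = math.ceil((end - start) / interval)
    return [(start + n * interval,
             min(start + (n + 1) * interval, end)) for n in range(nbins)]
```

For example, numeric_bins(0, 25, 10) yields three bins, the last of which is truncated at the upper bound.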
-
GPUdb.
aggregate_unique
(table_name=None, column_name=None, offset=0, limit=-9999, encoding='binary', options={})[source]¶ Returns all the unique values from a particular column (specified by input parameter column_name) of a particular table or view (specified by input parameter table_name). If input parameter column_name is a numeric column the values will be in output parameter binary_encoded_response. Otherwise if input parameter column_name is a string column the values will be in output parameter json_encoded_response. The results can be paged via the input parameter offset and input parameter limit parameters.
Columns marked as store-only are unable to be used with this function.
To get the first 10 unique values sorted in descending order input parameter options would be:
{"limit":"10","sort_order":"descending"}.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a result_table name is specified in the input parameter options, the results are stored in a new table with that name–no results are returned in the response. Both the table name and resulting column name must adhere to standard naming conventions; any column expression will need to be aliased. If the source table’s shard key is used as the input parameter column_name, the result table will be sharded, in all other cases it will be replicated. Sorting will properly function only if the result table is replicated or if there is only one processing node and should not be relied upon in other cases. Not available if the value of input parameter column_name is an unrestricted-length string.
Parameters
- table_name (str) –
- Name of an existing table or view on which the operation will be performed.
- column_name (str) –
- Name of the column or an expression containing one or more column names on which the unique function would be applied.
- offset (long) –
- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the max number of results should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use output parameter has_more_records to see if more records exist in the result to be fetched, and input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records. Allowed values are:
- binary – Indicates that the returned records should be binary encoded.
- json – Indicates that the returned records should be json encoded.
The default value is ‘binary’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the table specified in result_table. If the collection provided is non-existent, the collection will be automatically created. If empty, then the table will be a top-level table.
expression – Optional filter expression to apply to the table.
sort_order – String indicating how the returned values should be sorted. Allowed values are:
- ascending
- descending
The default value is ‘ascending’.
result_table – The name of the table used to store the results. If present, no results are returned in the response. Has the same naming restrictions as tables. Not available if input parameter column_name is an unrestricted-length string.
result_table_persist – If true, then the result table specified in result_table will be persisted and will not expire unless a ttl is specified. If false, then the result table will be an in-memory table and will expire unless a ttl is specified otherwise. Allowed values are:
- true
- false
The default value is ‘false’.
result_table_force_replicated – Force the result table to be replicated (ignores any sharding). Must be used in combination with the result_table option. Allowed values are:
- true
- false
The default value is ‘false’.
result_table_generate_pk – If true then set a primary key for the result table. Must be used in combination with the result_table option. Allowed values are:
- true
- false
The default value is ‘false’.
ttl – Sets the TTL of the table specified in result_table.
chunk_size – Indicates the number of records per chunk to be used for the result table. Must be used in combination with the result_table option.
view_id – ID of view of which the result table will be a member. The default value is ‘’.
Returns
A dict with the following entries–
- table_name (str) –
- The same table name as was passed in the parameter list.
- response_schema_str (str) –
- Avro schema of output parameter binary_encoded_response or output parameter json_encoded_response.
- binary_encoded_response (str) –
- Avro binary encoded response.
- json_encoded_response (str) –
- Avro JSON encoded response.
- has_more_records (bool) –
- Too many records. Returned a partial set.
- info (dict of str to str) –
- Additional information.
- record_type (RecordType or None) –
- A RecordType object with which the user can decode the binary data by using GPUdbRecord.decode_binary_data(). If JSON encoding is used, then None.
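The unique/sort/paging semantics above can be illustrated in plain Python (no server needed); the data values below are made-up examples, not from the API:

```python
# Made-up sample data standing in for a column's values.
data = ["Boston", "Austin", "Boston", "Denver", "Austin", "El Paso"]

offset, limit = 0, 2
# sort_order="descending" over the distinct values
unique_sorted = sorted(set(data), reverse=True)
# offset/limit paging over the sorted unique values
page = unique_sorted[offset:offset + limit]
# analogous to the has_more_records output parameter
has_more_records = offset + limit < len(unique_sorted)
# page -> ['El Paso', 'Denver']; has_more_records -> True
```

Against a live server, the equivalent request would be a sketch along the lines of `db.aggregate_unique(table_name="employees", column_name="city", offset=0, limit=2, options={"sort_order": "descending"})`, where the table and column names are assumed examples.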
- GPUdb.aggregate_unique_and_decode(table_name=None, column_name=None, offset=0, limit=-9999, encoding='binary', options={}, record_type=None, force_primitive_return_types=True, get_column_major=True)[source]¶
Returns all the unique values from a particular column (specified by input parameter column_name) of a particular table or view (specified by input parameter table_name). If input parameter column_name is a numeric column, the values will be in output parameter binary_encoded_response. Otherwise, if input parameter column_name is a string column, the values will be in output parameter json_encoded_response. The results can be paged via the input parameter offset and input parameter limit parameters.
Columns marked as store-only are unable to be used with this function.
To get the first 10 unique values sorted in descending order input parameter options would be:
{"limit":"10","sort_order":"descending"}.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
If a result_table name is specified in the input parameter options, the results are stored in a new table with that name–no results are returned in the response. Both the table name and resulting column name must adhere to standard naming conventions; any column expression will need to be aliased. If the source table’s shard key is used as the input parameter column_name, the result table will be sharded, in all other cases it will be replicated. Sorting will properly function only if the result table is replicated or if there is only one processing node and should not be relied upon in other cases. Not available if the value of input parameter column_name is an unrestricted-length string.
Parameters
- table_name (str) –
- Name of an existing table or view on which the operation will be performed.
- column_name (str) –
- Name of the column or an expression containing one or more column names on which the unique function would be applied.
- offset (long) –
- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned. Or END_OF_SET (-9999) to indicate that the max number of results should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use output parameter has_more_records to see if more records exist in the result to be fetched, and input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records. Allowed values are:
- binary – Indicates that the returned records should be binary encoded.
- json – Indicates that the returned records should be json encoded.
The default value is ‘binary’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the table specified in result_table. If the collection provided is non-existent, the collection will be automatically created. If empty, then the table will be a top-level table.
expression – Optional filter expression to apply to the table.
sort_order – String indicating how the returned values should be sorted. Allowed values are:
- ascending
- descending
The default value is ‘ascending’.
result_table – The name of the table used to store the results. If present, no results are returned in the response. Has the same naming restrictions as tables. Not available if input parameter column_name is an unrestricted-length string.
result_table_persist – If true, then the result table specified in result_table will be persisted and will not expire unless a ttl is specified. If false, then the result table will be an in-memory table and will expire unless a ttl is specified otherwise. Allowed values are:
- true
- false
The default value is ‘false’.
result_table_force_replicated – Force the result table to be replicated (ignores any sharding). Must be used in combination with the result_table option. Allowed values are:
- true
- false
The default value is ‘false’.
result_table_generate_pk – If true then set a primary key for the result table. Must be used in combination with the result_table option. Allowed values are:
- true
- false
The default value is ‘false’.
ttl – Sets the TTL of the table specified in result_table.
chunk_size – Indicates the number of records per chunk to be used for the result table. Must be used in combination with the result_table option.
view_id – ID of view of which the result table will be a member. The default value is ‘’.
- record_type (RecordType or None) –
- The record type expected in the results, or None to determine the appropriate type automatically. If known, providing this may improve performance in binary mode. Not used in JSON mode. The default value is None.
- force_primitive_return_types (bool) –
- If True, then OrderedDict objects will be returned, where string sub-type columns will have their values converted back to strings; for example, the Python datetime structs used for datetime type columns would have their values returned as strings. If False, then Record objects will be returned, which for string sub-types will return native or custom structs; no conversion to string takes place. String conversions, when returning OrderedDicts, incur a speed penalty, and it is strongly recommended to use the Record object option instead. If True, but none of the returned columns require a conversion, then the original Record objects will be returned. Default value is True.
- get_column_major (bool) –
- Indicates if the decoded records will be transposed to be column-major or returned as is (row-major). Default value is True.
Returns
A dict with the following entries–
- table_name (str) –
- The same table name as was passed in the parameter list.
- response_schema_str (str) –
- Avro schema of output parameter binary_encoded_response or output parameter json_encoded_response.
- has_more_records (bool) –
- Too many records. Returned a partial set.
- info (dict of str to str) –
- Additional information.
- records (list of Record) –
- A list of Record objects which contain the decoded records.
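The `get_column_major` transposition that the `*_and_decode` methods perform client-side can be sketched in plain Python (no server needed); the record values below are made-up examples:

```python
# Made-up decoded records in row-major form (one dict per record).
records_row_major = [
    {"city": "Austin", "count": 3},
    {"city": "Boston", "count": 5},
]

# get_column_major=True yields one list per column instead of one
# dict per row.
records_col_major = {
    col: [rec[col] for rec in records_row_major]
    for col in records_row_major[0]
}
# records_col_major -> {'city': ['Austin', 'Boston'], 'count': [3, 5]}
```

Column-major output is often more convenient for analytics (e.g. feeding columns directly into NumPy or pandas), which is why it is the default.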
- GPUdb.aggregate_unpivot(table_name=None, column_names=None, variable_column_name='', value_column_name='', pivoted_columns=None, encoding='binary', options={})[source]¶
Rotates the column values into rows.
For unpivot details and examples, see Unpivot. For limitations, see Unpivot Limitations.
Unpivot is used to normalize tables that are built for cross tabular reporting purposes. The unpivot operator rotates the column values for all the pivoted columns. A variable column, value column and all columns from the source table except the unpivot columns are projected into the result table. The variable column and value columns in the result table indicate the pivoted column name and values respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
Parameters
- table_name (str) –
- Name of the table on which the operation will be performed. Must be an existing table/view.
- column_names (list of str) –
- List of column names or expressions. A wildcard ‘*’ can be used to include all the non-pivoted columns from the source table. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- variable_column_name (str) –
- Specifies the variable/parameter column name. The default value is ‘’.
- value_column_name (str) –
- Specifies the value column name. The default value is ‘’.
- pivoted_columns (list of str) –
- List of one or more values, typically the column names of the input table. All the columns in the source table must have the same data type. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- encoding (str) –
Specifies the encoding for returned records. Allowed values are:
- binary – Indicates that the returned records should be binary encoded.
- json – Indicates that the returned records should be json encoded.
The default value is ‘binary’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the table specified in result_table. If the collection provided is non-existent, the collection will be automatically created. If empty, then the table will be a top-level table.
result_table – The name of the table used to store the results. Has the same naming restrictions as tables. If present, no results are returned in the response.
result_table_persist – If true, then the result table specified in result_table will be persisted and will not expire unless a ttl is specified. If false, then the result table will be an in-memory table and will expire unless a ttl is specified otherwise. Allowed values are:
- true
- false
The default value is ‘false’.
expression – Filter expression to apply to the table prior to unpivot processing.
order_by – Comma-separated list of the columns to be sorted by; e.g. ‘timestamp asc, x desc’. The columns specified must be present in input table. If any alias is given for any column name, the alias must be used, rather than the original column name. The default value is ‘’.
chunk_size – Indicates the number of records per chunk to be used for the result table. Must be used in combination with the result_table option.
limit – The number of records to keep. The default value is ‘’.
ttl – Sets the TTL of the table specified in result_table.
view_id – view this result table is part of. The default value is ‘’.
materialize_on_gpu – No longer used. See Resource Management Concepts for information about how resources are managed, Tier Strategy Concepts for how resources are targeted for VRAM, and Tier Strategy Usage for how to specify a table’s priority in VRAM. Allowed values are:
- true
- false
The default value is ‘false’.
create_indexes – Comma-separated list of columns on which to create indexes on the table specified in result_table. The columns specified must be present in output column names. If any alias is given for any column name, the alias must be used, rather than the original column name.
result_table_force_replicated – Force the result table to be replicated (ignores any sharding). Must be used in combination with the result_table option. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- table_name (str) –
- Typically shows the result-table name if provided in the request (Ignore otherwise).
- response_schema_str (str) –
- Avro schema of output parameter binary_encoded_response or output parameter json_encoded_response.
- binary_encoded_response (str) –
- Avro binary encoded response.
- json_encoded_response (str) –
- Avro JSON encoded response.
- total_number_of_records (long) –
- Total/Filtered number of records.
- has_more_records (bool) –
- Too many records. Returned a partial set.
- info (dict of str to str) –
- Additional information.
- record_type (RecordType or None) –
- A RecordType object with which the user can decode the binary data by using GPUdbRecord.decode_binary_data(). If JSON encoding is used, then None.
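The unpivot rotation described above can be illustrated in plain Python (no server needed); the table data and column names below are made-up examples:

```python
# Made-up cross-tabular source rows; q1/q2 are the pivoted columns.
source_rows = [
    {"city": "Austin", "q1": 10, "q2": 20},
    {"city": "Boston", "q1": 30, "q2": 40},
]
pivoted_columns = ["q1", "q2"]

# Each pivoted column becomes a (variable, value) pair; the non-pivoted
# column 'city' is carried through to every result row.
unpivoted = [
    {"city": row["city"], "variable": col, "value": row[col]}
    for row in source_rows
    for col in pivoted_columns
]
# 2 source rows x 2 pivoted columns -> 4 result rows
```

The corresponding server call would be a sketch like `db.aggregate_unpivot("quarterly_sales", ["city"], "variable", "value", ["q1", "q2"])`, where all names are assumed examples.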
- GPUdb.aggregate_unpivot_and_decode(table_name=None, column_names=None, variable_column_name='', value_column_name='', pivoted_columns=None, encoding='binary', options={}, record_type=None, force_primitive_return_types=True, get_column_major=True)[source]¶
Rotates the column values into rows.
For unpivot details and examples, see Unpivot. For limitations, see Unpivot Limitations.
Unpivot is used to normalize tables that are built for cross tabular reporting purposes. The unpivot operator rotates the column values for all the pivoted columns. A variable column, value column and all columns from the source table except the unpivot columns are projected into the result table. The variable column and value columns in the result table indicate the pivoted column name and values respectively.
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
Parameters
- table_name (str) –
- Name of the table on which the operation will be performed. Must be an existing table/view.
- column_names (list of str) –
- List of column names or expressions. A wildcard ‘*’ can be used to include all the non-pivoted columns from the source table. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- variable_column_name (str) –
- Specifies the variable/parameter column name. The default value is ‘’.
- value_column_name (str) –
- Specifies the value column name. The default value is ‘’.
- pivoted_columns (list of str) –
- List of one or more values, typically the column names of the input table. All the columns in the source table must have the same data type. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- encoding (str) –
Specifies the encoding for returned records. Allowed values are:
- binary – Indicates that the returned records should be binary encoded.
- json – Indicates that the returned records should be json encoded.
The default value is ‘binary’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the table specified in result_table. If the collection provided is non-existent, the collection will be automatically created. If empty, then the table will be a top-level table.
result_table – The name of the table used to store the results. Has the same naming restrictions as tables. If present, no results are returned in the response.
result_table_persist – If true, then the result table specified in result_table will be persisted and will not expire unless a ttl is specified. If false, then the result table will be an in-memory table and will expire unless a ttl is specified otherwise. Allowed values are:
- true
- false
The default value is ‘false’.
expression – Filter expression to apply to the table prior to unpivot processing.
order_by – Comma-separated list of the columns to be sorted by; e.g. ‘timestamp asc, x desc’. The columns specified must be present in input table. If any alias is given for any column name, the alias must be used, rather than the original column name. The default value is ‘’.
chunk_size – Indicates the number of records per chunk to be used for the result table. Must be used in combination with the result_table option.
limit – The number of records to keep. The default value is ‘’.
ttl – Sets the TTL of the table specified in result_table.
view_id – view this result table is part of. The default value is ‘’.
materialize_on_gpu – No longer used. See Resource Management Concepts for information about how resources are managed, Tier Strategy Concepts for how resources are targeted for VRAM, and Tier Strategy Usage for how to specify a table’s priority in VRAM. Allowed values are:
- true
- false
The default value is ‘false’.
create_indexes – Comma-separated list of columns on which to create indexes on the table specified in result_table. The columns specified must be present in output column names. If any alias is given for any column name, the alias must be used, rather than the original column name.
result_table_force_replicated – Force the result table to be replicated (ignores any sharding). Must be used in combination with the result_table option. Allowed values are:
- true
- false
The default value is ‘false’.
- record_type (RecordType or None) –
- The record type expected in the results, or None to determine the appropriate type automatically. If known, providing this may improve performance in binary mode. Not used in JSON mode. The default value is None.
- force_primitive_return_types (bool) –
- If True, then OrderedDict objects will be returned, where string sub-type columns will have their values converted back to strings; for example, the Python datetime structs used for datetime type columns would have their values returned as strings. If False, then Record objects will be returned, which for string sub-types will return native or custom structs; no conversion to string takes place. String conversions, when returning OrderedDicts, incur a speed penalty, and it is strongly recommended to use the Record object option instead. If True, but none of the returned columns require a conversion, then the original Record objects will be returned. Default value is True.
- get_column_major (bool) –
- Indicates if the decoded records will be transposed to be column-major or returned as is (row-major). Default value is True.
Returns
A dict with the following entries–
- table_name (str) –
- Typically shows the result-table name if provided in the request (Ignore otherwise).
- response_schema_str (str) –
- Avro schema of output parameter binary_encoded_response or output parameter json_encoded_response.
- total_number_of_records (long) –
- Total/Filtered number of records.
- has_more_records (bool) –
- Too many records. Returned a partial set.
- info (dict of str to str) –
- Additional information.
- records (list of Record) –
- A list of Record objects which contain the decoded records.
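A hedged call sketch for the decode variant; all table, column, and option values below are assumed examples, not part of the API reference. When a `result_table` option is given, the server stores the results rather than returning them:

```python
# Options directing the server to persist results into a named table;
# names and values are hypothetical examples.
options = {
    "result_table": "sales_unpivoted",
    "result_table_persist": "true",
    "ttl": "120",  # minutes
}

# Against a live server one would run (not executed here):
# response = db.aggregate_unpivot_and_decode(
#     table_name="quarterly_sales",
#     column_names=["city"],
#     variable_column_name="quarter",
#     value_column_name="amount",
#     pivoted_columns=["q1", "q2"],
#     options=options)
# response["records"] holds the decoded Record objects; it is empty when
# result_table is used, since no results are returned in the response.
```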
- GPUdb.alter_resource_group(name=None, tier_attributes={}, ranking='', adjoining_resource_group='', options={})[source]¶
Alters the properties of an existing resource group to facilitate resource management.
Parameters
- name (str) –
- Name of the group to be altered. Must be an existing resource group name.
- tier_attributes (dict of str to dicts of str to str) –
Optional map containing tier names and their respective attribute group limits. The only valid attribute limit that can be set is max_memory (in bytes) for the VRAM & RAM tiers.
For instance, to set max VRAM capacity to 1GB and max RAM capacity to 10GB, use: {'VRAM':{'max_memory':'1000000000'}, 'RAM':{'max_memory':'10000000000'}}. The default value is an empty dict ( {} ). Allowed keys are:
- max_memory – Maximum amount of memory usable in the given tier at one time for this group.
- ranking (str) –
If the resource group ranking is to be updated, this indicates the relative ranking among existing resource groups where this resource group will be moved; leave blank if not changing the ranking. When using before or after, specify which resource group this one will be inserted before or after in input parameter adjoining_resource_group. Allowed values are:
- first
- last
- before
- after
The default value is ‘’.
- adjoining_resource_group (str) –
- If input parameter ranking is before or after, this field indicates the resource group before or after which the current group will be placed; otherwise, leave blank. The default value is ‘’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
max_cpu_concurrency – Maximum number of simultaneous threads that will be used to execute a request for this group.
max_scheduling_priority – Maximum priority of a scheduled task for this group.
max_tier_priority – Maximum priority of a tiered object for this group.
is_default_group – If true, this request applies to the global default resource group. It is an error for this field to be true when the input parameter name field is also populated. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- info (dict of str to str) –
- Additional information.
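A hedged sketch of building the `tier_attributes` map from the example above; the group name and option values are assumed examples:

```python
# Per-tier memory limits, in bytes, as strings (hypothetical values).
tier_attributes = {
    "VRAM": {"max_memory": "1000000000"},   # 1 GB
    "RAM":  {"max_memory": "10000000000"},  # 10 GB
}

# Against a live server one would run (not executed here):
# response = db.alter_resource_group(
#     name="analytics_group",            # existing group (assumed name)
#     tier_attributes=tier_attributes,
#     ranking="first",                   # move to the top of the ranking
#     options={"max_cpu_concurrency": "4"})
```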
- GPUdb.alter_role(name=None, action=None, value=None, options={})[source]¶
Alters a Role.
Parameters
- name (str) –
- Name of the role to be altered. Must be an existing role.
- action (str) –
Modification operation to be applied to the role. Allowed values are:
- set_resource_group – Sets the resource group for an internal role. The resource group must exist, otherwise, an empty string assigns the role to the default resource group.
- value (str) –
- The value of the modification, depending on input parameter action.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- info (dict of str to str) –
- Additional information.
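A minimal hedged sketch of the one supported action; the role and group names are assumed examples:

```python
# Assign an internal role to a resource group; an empty value would move
# the role back to the default resource group. Names are hypothetical.
params = {
    "name": "etl_role",
    "action": "set_resource_group",
    "value": "analytics_group",
}

# Against a live server one would run (not executed here):
# response = db.alter_role(**params)
```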
- GPUdb.alter_system_properties(property_updates_map=None, options={})[source]¶
The alter_system_properties() endpoint is primarily used to simplify the testing of the system and is not expected to be used during normal execution. Commands are given through the input parameter property_updates_map, whose keys are commands and values are strings representing integer values (for example ‘8000’) or boolean values (‘true’ or ‘false’).
Parameters
- property_updates_map (dict of str to str) –
Map containing the properties of the system to be updated. Error if empty. Allowed keys are:
- sm_omp_threads – Set the number of OpenMP threads that will be used to service filter & aggregation requests against collections to the specified integer value.
- kernel_omp_threads – Set the number of kernel OpenMP threads to the specified integer value.
- concurrent_kernel_execution – Enables concurrent kernel execution if the value is true and disables it if the value is false. Allowed values are:
- true
- false
- subtask_concurrency_limit – Sets the maximum number of simultaneous threads allocated to a given request, on each rank. Note that thread allocation may also be limited by resource group limits and/or system load.
- chunk_size – Sets the number of records per chunk to be used for all new tables.
- evict_columns – Attempts to evict columns from memory to the persistent store. Value string is a semicolon separated list of entries, each entry being a table name optionally followed by a comma and a comma separated list of column names to attempt to evict. An empty value string will attempt to evict all tables and columns.
- execution_mode – Sets the execution mode for kernel executions to the specified string value. Possible values are host, device, default (engine decides), or an integer value that indicates the max chunk size to execute on the host.
- external_files_directory – Sets the root directory path where external table data files are accessed from. Path must exist on the head node.
- flush_to_disk – Flushes any changes to any tables to the persistent store. These changes include updates to the vector store, object store, and text search store. Value string is ignored.
- clear_cache – Clears cached results. Useful to allow repeated timing of endpoints. Value string is the name of the table for which to clear the cached results, or an empty string to clear the cached results for all tables.
- communicator_test – Invoke the communicator test and report timing results. Value string is a semicolon-separated list of [key]=[value] expressions. Expressions are: num_transactions=[num] where num is the number of request reply transactions to invoke per test; message_size=[bytes] where bytes is the size in bytes of the messages to send; check_values=[enabled] where if enabled is true the values of the messages received are verified.
- set_message_timers_enabled – Enables the communicator test to collect additional timing statistics when the value string is true; disables the collection when the value string is false. Allowed values are:
- true
- false
- network_speed – Invoke the network speed test and report timing results. Value string is a semicolon-separated list of [key]=[value] expressions. Valid expressions are: seconds=[time] where time is the time in seconds to run the test; data_size=[bytes] where bytes is the size in bytes of the block to be transferred; threads=[number of threads]; to_ranks=[space-separated list of ranks] where the list of ranks is the ranks that rank 0 will send data to and get data from. If to_ranks is unspecified then all worker ranks are used.
- request_timeout – Number of minutes after which filtering (e.g., filter()) and aggregating (e.g., aggregate_group_by()) queries will time out. The default value is ‘20’.
- max_get_records_size – The maximum number of records the database will serve for a given data retrieval call. The default value is ‘20000’.
- enable_audit – Enable or disable auditing.
- audit_headers – Enable or disable auditing of request headers.
- audit_body – Enable or disable auditing of request bodies.
- audit_data – Enable or disable auditing of request data.
- shadow_agg_size – Size of the shadow aggregate chunk cache in bytes. The default value is ‘10000000’.
- shadow_filter_size – Size of the shadow filter chunk cache in bytes. The default value is ‘10000000’.
- synchronous_compression – Compress vectors on set_compression (instead of waiting for the background thread). The default value is ‘false’.
- enable_overlapped_equi_join – Enable overlapped-equi-join filter. The default value is ‘true’.
- enable_compound_equi_join – Enable compound-equi-join filter plan type. The default value is ‘false’.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- updated_properties_map (dict of str to str) –
- Map of the values updated; for speed tests, a map of each measured value to its measurement.
- info (dict of str to str) –
- Additional information.
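A hedged sketch of a `property_updates_map`; the property values below are assumed examples. Note that all values are passed as strings, per the endpoint's contract:

```python
# Hypothetical property updates; every value is a string, including
# integers and booleans.
property_updates_map = {
    "concurrent_kernel_execution": "true",
    "chunk_size": "1000000",     # records per chunk for new tables
    "request_timeout": "30",     # minutes
}

# Against a live server one would run (not executed here):
# response = db.alter_system_properties(property_updates_map)
# response["updated_properties_map"] echoes the values applied.
```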
- GPUdb.alter_table(table_name=None, action=None, value=None, options={})[source]¶
Apply various modifications to a table, view, or collection. The available modifications include the following:
Manage a table’s columns–a column can be added, removed, or have its type and properties modified, including whether it is compressed or not.
Create or delete an index on a particular column. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
Create or delete a foreign key on a particular column.
Manage a range-partitioned or a manual list-partitioned table’s partitions.
Set (or reset) the tier strategy of a table or view.
Refresh and manage the refresh mode of a materialized view.
Set the time-to-live (TTL). This can be applied to tables, views, or collections. When applied to collections, every contained table & view that is not protected will have its TTL set to the given value.
Set the global access mode (i.e. locking) for a table. This setting trumps any role-based access controls that may be in place; e.g., a user with write access to a table marked read-only will not be able to insert records into it. The mode can be set to read-only, write-only, read/write, and no access.
Change the protection mode to prevent or allow automatic expiration. This can be applied to tables, views, and collections.
Parameters
- table_name (str) –
- Table on which the operation will be performed. Must be an existing table, view, or collection.
- action (str) –
Modification operation to be applied. Allowed values are:
- allow_homogeneous_tables – No longer supported; action will be ignored.
- create_index – Creates either a column (attribute) index or chunk skip index, depending on the specified index_type, on the column name specified in input parameter value. If this column already has the specified index, an error will be returned.
- delete_index – Deletes either a column (attribute) index or chunk skip index, depending on the specified index_type, on the column name specified in input parameter value. If this column does not have the specified index, an error will be returned.
- move_to_collection – Moves a table or view into a collection named input parameter value. If the collection provided is non-existent, the collection will be automatically created. If input parameter value is empty, then the table or view will be top-level.
- protected – Sets whether the given input parameter table_name should be protected or not. The input parameter value must be either ‘true’ or ‘false’.
- rename_table – Renames a table, view or collection to input parameter value. Has the same naming restrictions as tables.
- ttl – Sets the time-to-live in minutes of the table, view, or collection specified in input parameter table_name.
- add_column – Adds the column specified in input parameter value to the table specified in input parameter table_name. Use column_type and column_properties in input parameter options to set the column’s type and properties, respectively.
- change_column – Changes type and properties of the column specified in input parameter value. Use column_type and column_properties in input parameter options to set the column’s type and properties, respectively. Note that primary key and/or shard key columns cannot be changed. All unchanging column properties must be listed for the change to take place, e.g., to add dictionary encoding to an existing ‘char4’ column, both ‘char4’ and ‘dict’ must be specified in the input parameter options map.
- set_column_compression – Modifies the compression setting on the column specified in input parameter value to the compression type specified in compression_type.
- delete_column – Deletes the column specified in input parameter value from the table specified in input parameter table_name.
- create_foreign_key – Creates a foreign key specified in input parameter value using the format ‘(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]’.
- delete_foreign_key – Deletes a foreign key. The input parameter value should be the foreign_key_name specified when creating the key or the complete string used to define it.
- add_partition – Adds the partition specified in input parameter value, to either a range-partitioned or manual list-partitioned table.
- remove_partition – Removes the partition specified in input parameter value (and relocates all of its data to the default partition) from either a range-partitioned or manual list-partitioned table.
- delete_partition – Deletes the partition specified in input parameter value (and all of its data) from either a range-partitioned or manual list-partitioned table.
- set_global_access_mode – Sets the global access mode (i.e. locking) for the table specified in input parameter table_name. Specify the access mode in input parameter value. Valid modes are ‘no_access’, ‘read_only’, ‘write_only’ and ‘read_write’.
- refresh – Replays all the table creation commands required to create this materialized view.
- set_refresh_method – Sets the method by which this materialized view is refreshed to the method specified in input parameter value - one of ‘manual’, ‘periodic’, ‘on_change’.
- set_refresh_start_time – Sets the time to start periodic refreshes of this materialized view to the datetime string specified in input parameter value with format ‘YYYY-MM-DD HH:MM:SS’. Subsequent refreshes occur at the specified time + N * the refresh period.
- set_refresh_period – Sets the time interval in seconds at which to refresh this materialized view to the value specified in input parameter value. Also, sets the refresh method to periodic if not already set.
- remove_text_search_attributes – Removes text search attribute from all columns.
- set_strategy_definition – Sets the tier strategy for the table and its columns to the one specified in input parameter value, replacing the existing tier strategy in its entirety. See tier strategy usage for format and tier strategy examples for examples.
- value (str) –
- The value of the modification, depending on input parameter action. For example, if input parameter action is add_column, this would be the column name; while the column’s definition would be covered by the column_type, column_properties, column_default_value, and add_column_expression in input parameter options. If input parameter action is ttl, it would be the number of minutes for the new TTL. If input parameter action is refresh, this field would be blank.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
column_default_value – When adding a column, set a default value for existing records. For nullable columns, the default value will be null, regardless of data type.
column_properties – When adding or changing a column, set the column properties (strings, separated by a comma: data, store_only, text_search, char8, int8 etc).
column_type – When adding or changing a column, set the column type (strings, separated by a comma: int, double, string, null etc).
compression_type – When setting column compression (set_column_compression for input parameter action), compression type to use: none (to use no compression) or a valid compression type. Allowed values are:
- none
- snappy
- lz4
- lz4hc
The default value is ‘snappy’.
copy_values_from_column – Deprecated. Please use add_column_expression instead.
rename_column – When changing a column, specify new column name.
validate_change_column – When changing a column, validate the change before applying it. If true, all values are validated; a value too large (or too long) for the new type will prevent the change. If false, values that are too large or too long will be truncated. Allowed values are:
- true
- false
The default value is ‘true’.
update_last_access_time – Indicates whether the time-to-live (TTL) expiration countdown timer should be reset to the table’s TTL. Allowed values are:
- true – Reset the expiration countdown timer to the table’s configured TTL.
- false – Don’t reset the timer; expiration countdown will continue from where it is, as if the table had not been accessed.
The default value is ‘true’.
add_column_expression – When adding a column, an optional expression to use for the new column’s values. Any valid expression may be used, including one containing references to existing columns in the same table.
strategy_definition – Optional parameter for specifying the tier strategy for the table and its columns when input parameter action is set_strategy_definition, replacing the existing tier strategy in its entirety. See tier strategy usage for format and tier strategy examples for examples. This option will be ignored if input parameter value is also specified.
index_type – Type of index to create, when input parameter action is create_index, or to delete, when input parameter action is delete_index. Allowed values are:
- column – Create or delete a column (attribute) index.
- chunk_skip – Create or delete a chunk skip index.
The default value is ‘column’.
Returns
A dict with the following entries–
- table_name (str) –
- Table on which the operation was performed.
- action (str) –
- Modification operation that was performed.
- value (str) –
- The value of the modification that was performed.
- type_id (str) –
- return the type_id (when changing a table, a new type may be created)
- type_definition (str) –
- return the type_definition (when changing a table, a new type may be created)
- properties (dict of str to lists of str) –
- return the type properties (when changing a table, a new type may be created)
- label (str) –
- return the type label (when changing a table, a new type may be created)
- info (dict of str to str) –
- Additional information.
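A minimal sketch of an alter_table call that adds a column (the table and column names here are hypothetical placeholders; the commented-out call requires a reachable GPUdb server):

```python
# Hypothetical example: 'my_table' and 'new_col' are placeholders.
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")

# Options describing the new column's type and properties; note that the
# column name itself goes in the 'value' argument, not in options.
options = {
    "column_type": "int",
    "column_properties": "nullable",
    "column_default_value": "0",   # default applied to existing records
}

# response = db.alter_table(table_name="my_table",
#                           action="add_column",
#                           value="new_col",
#                           options=options)
```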
-
GPUdb.
alter_table_columns
(table_name=None, column_alterations=None, options=None)[source]¶ Apply various modifications to columns in a table or view. The available modifications include the following:
Create or delete an index on a particular column. This can speed up certain operations when using expressions containing equality or relational operators on indexed columns. This only applies to tables.
Manage a table’s columns–a column can be added, removed, or have its type and properties modified.
Set or unset compression for a column.
Parameters
- table_name (str) –
- Table on which the operation will be performed. Must be an existing table or view.
- column_alterations (list of dicts of str to str) –
- List of alter-table add/delete/change column requests, all for the same table. Each request is a map that includes ‘column_name’, ‘action’, and the options specific to that action. Note that these are the same options as in alter table requests, but given in the same map as the column name and the action. For example: [{‘column_name’: ‘col_1’, ‘action’: ‘change_column’, ‘rename_column’: ‘col_2’}, {‘column_name’: ‘col_1’, ‘action’: ‘add_column’, ‘type’: ‘int’, ‘default_value’: ‘1’}]. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
- Optional parameters.
Returns
A dict with the following entries–
- table_name (str) –
- Table on which the operation was performed.
- type_id (str) –
- return the type_id (when changing a table, a new type may be created)
- type_definition (str) –
- return the type_definition (when changing a table, a new type may be created)
- properties (dict of str to lists of str) –
- return the type properties (when changing a table, a new type may be created)
- label (str) –
- return the type label (when changing a table, a new type may be created)
- column_alterations (list of dicts of str to str) –
- List of alter-table add/delete/change column requests, all for the same table. Each request is a map that includes ‘column_name’, ‘action’, and the options specific to that action, e.g., [{‘column_name’: ‘col_1’, ‘action’: ‘change_column’, ‘rename_column’: ‘col_2’}, {‘column_name’: ‘col_1’, ‘action’: ‘add_column’, ‘type’: ‘int’, ‘default_value’: ‘1’}]
- info (dict of str to str) –
- Additional information.
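A sketch of batching several column changes into one alter_table_columns call (hypothetical table and column names; the commented-out call requires a live server):

```python
# Each map carries 'column_name', 'action', and the action-specific options.
# All names here are placeholders.
column_alterations = [
    {"column_name": "col_1", "action": "change_column",
     "rename_column": "col_2"},
    {"column_name": "col_3", "action": "add_column",
     "type": "int", "default_value": "1"},
]

# response = db.alter_table_columns(table_name="my_table",
#                                   column_alterations=column_alterations,
#                                   options={})
```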
-
GPUdb.
alter_table_metadata
(table_names=None, metadata_map=None, options={})[source]¶ Updates (adds or changes) metadata for tables. The metadata key and values must both be strings. This is an easy way to annotate whole tables rather than single records within tables. Some examples of metadata are owner of the table, table creation timestamp etc.
Parameters
- table_names (list of str) –
- Names of the tables whose metadata will be updated. All specified tables must exist, or an error will be returned. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- metadata_map (dict of str to str) –
- A map which contains the metadata of the tables that are to be updated. Note that only one map is provided for all the tables; so the change will be applied to every table. If the provided map is empty, then all existing metadata for the table(s) will be cleared.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- table_names (list of str) –
- Value of input parameter table_names.
- metadata_map (dict of str to str) –
- Value of input parameter metadata_map.
- info (dict of str to str) –
- Additional information.
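A sketch of annotating two tables with the same metadata map (table names and metadata values are hypothetical; the commented-out call requires a live server):

```python
# One map is applied to every table listed; an empty map would instead
# clear all existing metadata on the listed tables.
metadata_map = {
    "owner": "analytics_team",          # placeholder value
    "created": "2020-01-01 00:00:00",   # placeholder value
}

# response = db.alter_table_metadata(table_names=["table_a", "table_b"],
#                                    metadata_map=metadata_map)
```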
-
GPUdb.
alter_tier
(name=None, options={})[source]¶ Alters properties of an existing tier to facilitate resource management.
To disable watermark-based eviction, set both high_watermark and low_watermark to 100.
Parameters
- name (str) –
- Name of the tier to be altered. Must be an existing tier group name.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- capacity – Maximum size in bytes this tier may hold at once.
- high_watermark – Threshold of usage of this tier’s resource that, once exceeded, will trigger watermark-based eviction from this tier.
- low_watermark – Threshold of resource usage that, once fallen below after crossing the high_watermark, will cease watermark-based eviction from this tier.
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- info (dict of str to str) –
- Additional information.
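A sketch of adjusting a tier's capacity and watermark thresholds (the tier name ‘RAM’ and the sizes are assumptions for illustration; all option values are strings, and the commented-out call requires a live server):

```python
# Watermark-based eviction kicks in above high_watermark and stops
# below low_watermark; setting both to 100 would disable it.
options = {
    "capacity": str(100 * 1024**3),  # 100 GiB, expressed in bytes
    "high_watermark": "90",
    "low_watermark": "70",
}

# response = db.alter_tier(name="RAM", options=options)
```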
-
GPUdb.
alter_user
(name=None, action=None, value=None, options={})[source]¶ Alters a user.
Parameters
- name (str) –
- Name of the user to be altered. Must be an existing user.
- action (str) –
Modification operation to be applied to the user. Allowed values are:
- set_password – Sets the password of the user. The user must be an internal user.
- set_resource_group – Sets the resource group for an internal user. The resource group must exist, otherwise, an empty string assigns the user to the default resource group.
- value (str) –
- The value of the modification, depending on input parameter action.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- info (dict of str to str) –
- Additional information.
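A sketch of the two supported user modifications (the user name and values are placeholders; the commented-out calls require a live server):

```python
# 'set_password' applies only to internal users; 'set_resource_group'
# assigns an existing resource group, or the default group if value is ''.
password_request = {"name": "jsmith", "action": "set_password",
                    "value": "new_password"}
group_request = {"name": "jsmith", "action": "set_resource_group",
                 "value": "analysts_rg"}

# response = db.alter_user(**password_request)
# response = db.alter_user(**group_request)
```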
-
GPUdb.
append_records
(table_name=None, source_table_name=None, field_map=None, options={})[source]¶ Append (or insert) all records from a source table (specified by input parameter source_table_name) to a particular target table (specified by input parameter table_name). The field map (specified by input parameter field_map) holds the user specified map of target table column names with their mapped source column names.
Parameters
- table_name (str) –
- The table name for the records to be appended. Must be an existing table.
- source_table_name (str) –
- The source table name to get records from. Must be an existing table name.
- field_map (dict of str to str) –
- Contains the mapping of column names from the target table (specified by input parameter table_name) as the keys, and corresponding column names or expressions (e.g., ‘col_name+1’) from the source table (specified by input parameter source_table_name). Must be existing column names in source table and target table, and their types must be matched. For details on using expressions, see Expressions.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
offset – A non-negative integer indicating the number of initial results to skip from input parameter source_table_name. The minimum allowed value is 0. The maximum allowed value is MAX_INT. The default value is ‘0’.
limit – A positive integer indicating the maximum number of results to be returned from input parameter source_table_name. Or END_OF_SET (-9999) to indicate that the max number of results should be returned. The default value is ‘-9999’.
expression – Optional filter expression to apply to the input parameter source_table_name. The default value is ‘’.
order_by – Comma-separated list of the columns to be sorted by from source table (specified by input parameter source_table_name), e.g., ‘timestamp asc, x desc’. The order_by columns do not have to be present in input parameter field_map. The default value is ‘’.
update_on_existing_pk – Specifies the record collision policy for inserting the source table records (specified by input parameter source_table_name) into the target table (specified by input parameter table_name) table with a primary key. If set to true, any existing target table record with primary key values that match those of a source table record being inserted will be replaced by that new record. If set to false, any existing target table record with primary key values that match those of a source table record being inserted will remain unchanged and the new record discarded. If the specified table does not have a primary key, then this option is ignored. Allowed values are:
- true
- false
The default value is ‘false’.
truncate_strings – If set to true, it allows inserting longer strings into smaller charN string columns by truncating the longer strings to fit. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- info (dict of str to str) –
- Additional information.
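A sketch of appending records from a source table into a target table (table and column names are placeholders; the commented-out call requires a live server):

```python
# field_map keys are target-table columns; values are source columns
# or expressions evaluated against the source table.
field_map = {
    "id": "src_id",
    "amount": "src_amount + 1",   # expressions are allowed
}
options = {
    "expression": "src_amount > 0",     # optional source filter
    "update_on_existing_pk": "true",    # replace on primary-key collision
}

# response = db.append_records(table_name="target_table",
#                              source_table_name="source_table",
#                              field_map=field_map, options=options)
```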
-
GPUdb.
clear_statistics
(table_name='', column_name='', options={})[source]¶ Clears statistics (cardinality, mean value, etc.) for a column in a specified table.
Parameters
- table_name (str) –
- Name of a table. Must be an existing table. The default value is ‘’.
- column_name (str) –
- Name of the column in input parameter table_name for which to clear statistics. The column must be from an existing table. An empty string clears statistics for all columns in the table. The default value is ‘’.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- column_name (str) –
- Value of input parameter column_name.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
clear_table
(table_name='', authorization='', options={})[source]¶ Clears (drops) one or all tables in the database cluster. The operation is synchronous, meaning that the table will be cleared before the function returns. The response payload returns the status of the operation along with the name of the table that was cleared.
Parameters
- table_name (str) –
- Name of the table to be cleared. Must be an existing table. Empty string clears all available tables, though this behavior is prevented by default via gpudb.conf parameter ‘disable_clear_all’. The default value is ‘’.
- authorization (str) –
- No longer used. User can pass an empty string. The default value is ‘’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
no_error_if_not_exists – If true and if the table specified in input parameter table_name does not exist no error is returned. If false and if the table specified in input parameter table_name does not exist then an error is returned. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name for a given table, or ‘ALL CLEARED’ in case of clearing all tables.
- info (dict of str to str) –
- Additional information.
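A sketch of dropping a table without erroring if it is already gone (the table name is a placeholder; the commented-out call requires a live server):

```python
# With no_error_if_not_exists set to 'true', clearing a missing table
# succeeds silently instead of returning an error.
options = {"no_error_if_not_exists": "true"}

# response = db.clear_table(table_name="my_table",
#                           authorization="",   # no longer used
#                           options=options)
```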
-
GPUdb.
clear_table_monitor
(topic_id=None, options={})[source]¶ Deactivates a table monitor previously created with
create_table_monitor()
.
Parameters
- topic_id (str) –
- The topic ID returned by
create_table_monitor()
.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- topic_id (str) –
- Value of input parameter topic_id.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
clear_trigger
(trigger_id=None, options={})[source]¶ Clears or cancels the trigger identified by the specified handle. The output returns the handle of the trigger cleared as well as indicating success or failure of the trigger deactivation.
Parameters
- trigger_id (str) –
- ID for the trigger to be deactivated.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- trigger_id (str) –
- Value of input parameter trigger_id.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
collect_statistics
(table_name=None, column_names=None, options={})[source]¶ Collects statistics for one or more columns in a specified table.
Parameters
- table_name (str) –
- Name of a table. Must be an existing table.
- column_names (list of str) –
- List of one or more column names in input parameter table_name for which to collect statistics (cardinality, mean value, etc.). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- column_names (list of str) –
- Value of input parameter column_names.
- info (dict of str to str) –
- Additional information.
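A sketch of collecting, then later clearing, statistics for a pair of columns (table and column names are placeholders; the commented-out calls require a live server):

```python
# A single column name would also be accepted and promoted to a list.
column_names = ["col_a", "col_b"]

# response = db.collect_statistics(table_name="my_table",
#                                  column_names=column_names)
# Clearing with an empty column_name resets statistics for all columns:
# response = db.clear_statistics(table_name="my_table", column_name="")
```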
-
GPUdb.
create_graph
(graph_name=None, directed_graph=True, nodes=None, edges=None, weights=None, restrictions=None, options={})[source]¶ Creates a new graph network using given nodes, edges, weights, and restrictions.
IMPORTANT: It’s highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.
Parameters
- graph_name (str) –
- Name of the graph resource to generate.
- directed_graph (bool) –
If set to true, the graph will be directed. If set to false, the graph will not be directed. Consult Directed Graphs for more details. Allowed values are:
- true
- false
The default value is True.
- nodes (list of str) –
- Nodes represent fundamental topological units of a graph. Nodes must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS NODE_ID’, expressions, e.g., ‘ST_MAKEPOINT(column1, column2) AS NODE_WKTPOINT’, or constant values, e.g., ‘{9, 10, 11} AS NODE_ID’. If using constant values in an identifier combination, the number of values specified must match across the combination. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- edges (list of str) –
- Edges represent the required fundamental topological unit of a graph that typically connect nodes. Edges must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS EDGE_ID’, expressions, e.g., ‘SUBSTR(column, 1, 6) AS EDGE_NODE1_NAME’, or constant values, e.g., “{‘family’, ‘coworker’} AS EDGE_LABEL”. If using constant values in an identifier combination, the number of values specified must match across the combination. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- weights (list of str) –
- Weights represent a method of informing the graph solver of the cost of including a given edge in a solution. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS WEIGHTS_EDGE_ID’, expressions, e.g., ‘ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED’, or constant values, e.g., ‘{4, 15} AS WEIGHTS_VALUESPECIFIED’. If using constant values in an identifier combination, the number of values specified must match across the combination. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- restrictions (list of str) –
- Restrictions represent a method of informing the graph solver which edges and/or nodes should be ignored for the solution. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS RESTRICTIONS_EDGE_ID’, expressions, e.g., ‘column/2 AS RESTRICTIONS_VALUECOMPARED’, or constant values, e.g., ‘{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED’. If using constant values in an identifier combination, the number of values specified must match across the combination. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
restriction_threshold_value – Value-based restriction comparison. Any node or edge with a RESTRICTIONS_VALUECOMPARED value greater than the restriction_threshold_value will not be included in the graph.
merge_tolerance – If node geospatial positions are input (e.g., WKTPOINT, X, Y), determines the minimum separation allowed between unique nodes. If nodes are within the tolerance of each other, they will be merged as a single node. The default value is ‘1.0E-4’.
min_x – Minimum x (longitude) value for spatial graph associations. The default value is ‘-180.0’.
max_x – Maximum x (longitude) value for spatial graph associations. The default value is ‘180.0’.
min_y – Minimum y (latitude) value for spatial graph associations. The default value is ‘-90.0’.
max_y – Maximum y (latitude) value for spatial graph associations. The default value is ‘90.0’.
recreate – If set to true and the graph (using input parameter graph_name) already exists, the graph is deleted and recreated. Allowed values are:
- true
- false
The default value is ‘false’.
modify – If set to true, recreate is set to true, and the graph (specified using input parameter graph_name) already exists, the graph is updated with the given components. Allowed values are:
- true
- false
The default value is ‘false’.
export_create_results – If set to true, returns the graph topology in the response as arrays. Allowed values are:
- true
- false
The default value is ‘false’.
enable_graph_draw – If set to true, adds an ‘EDGE_WKTLINE’ column identifier to the specified graph_table so the graph can be viewed via WMS; for social and non-geospatial graphs, the ‘EDGE_WKTLINE’ column identifier will be populated with spatial coordinates derived from a flattening layout algorithm so the graph can still be viewed. Allowed values are:
- true
- false
The default value is ‘false’.
save_persist – If set to true, the graph will be saved in the persist directory (see the config reference for more information). If set to false, the graph will be removed when the graph server is shutdown. Allowed values are:
- true
- false
The default value is ‘false’.
sync_db – If set to true and save_persist is set to true, the graph will be fully reconstructed upon a database restart and be updated to align with any source table(s) updates made since the creation of the graph. If dynamic graph updates upon table inserts are desired, use add_table_monitor instead. Allowed values are:
- true
- false
The default value is ‘false’.
add_table_monitor – Adds a table monitor to every table used in the creation of the graph; this table monitor will trigger the graph to update dynamically upon inserts to the source table(s). Note that upon database restart, if save_persist is also set to true, the graph will be fully reconstructed and the table monitors will be reattached. For more details on table monitors, see
create_table_monitor()
. Allowed values are:
- true
- false
The default value is ‘false’.
graph_table – If specified, the created graph is also created as a table with the given name and following identifier columns: ‘EDGE_ID’, ‘EDGE_NODE1_ID’, ‘EDGE_NODE2_ID’. If left blank, no table is created. The default value is ‘’.
remove_label_only – When RESTRICTIONS are requested on labeled entities, if set to true, only the label associated with the entity will be deleted, not the entity itself. Otherwise (default), both the label and the entity will be deleted. Allowed values are:
- true
- false
The default value is ‘false’.
add_turns – Adds dummy ‘pillowed’ edges around intersection nodes where there are more than three edges, so that additional weight penalties can be imposed by the solve endpoints. (This increases the total number of edges.) Allowed values are:
- true
- false
The default value is ‘false’.
turn_angle – Value in degrees that modifies the thresholds for attributing right turns, left turns, sharp turns, and intersections. It is the vertical deviation angle from the incoming edge to the intersection node. The larger the value, the larger the threshold for sharp turns and intersections; the smaller the value, the larger the threshold for right and left turns; 0 < turn_angle < 90. The default value is ‘60’.
Returns
A dict with the following entries–
- num_nodes (long) –
- Total number of nodes created.
- num_edges (long) –
- Total number of edges created.
- edges_ids (list of longs) –
- Edges given as pairs of node indices. Only populated if export_create_results is set to true.
- info (dict of str to str) –
- Additional information.
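A sketch of assembling the identifier lists for a simple geospatial graph, following the identifier patterns described above (the table and column names are hypothetical; the commented-out call requires a live server and the graph server component):

```python
# Identifiers pair source columns/expressions with well-known roles
# via 'AS'; all names below are placeholders.
nodes = ["road_nodes.id AS NODE_ID"]
edges = ["road_edges.node1 AS EDGE_NODE1_ID",
         "road_edges.node2 AS EDGE_NODE2_ID"]
weights = ["road_edges.length AS WEIGHTS_VALUESPECIFIED"]
options = {"recreate": "true", "graph_table": "road_graph_table"}

# response = db.create_graph(graph_name="road_graph", directed_graph=True,
#                            nodes=nodes, edges=edges, weights=weights,
#                            restrictions=[], options=options)
```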
-
GPUdb.
create_job
(endpoint=None, request_encoding='binary', data=None, data_str=None, options={})[source]¶ Create a job which will run asynchronously. The response returns a job ID, which can be used to query the status and result of the job. The status and the result of the job upon completion can be requested by
get_job()
.
Parameters
- endpoint (str) –
- Indicates which endpoint to execute, e.g. ‘/alter/table’.
- request_encoding (str) –
The encoding of the request payload for the job. Allowed values are:
- binary
- json
- snappy
The default value is ‘binary’.
- data (str) –
- Binary-encoded payload for the job to be run asynchronously. The payload must contain the relevant input parameters for the endpoint indicated in input parameter endpoint. Please see the documentation for the appropriate endpoint to see what values must (or can) be specified. If this parameter is used, then input parameter request_encoding must be binary or snappy.
- data_str (str) –
- JSON-encoded payload for the job to be run asynchronously. The payload must contain the relevant input parameters for the endpoint indicated in input parameter endpoint. Please see the documentation for the appropriate endpoint to see what values must (or can) be specified. If this parameter is used, then input parameter request_encoding must be json.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- job_id (long) –
- An identifier for the job created by this call.
- info (dict of str to str) –
- Additional information.
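A sketch of submitting an /alter/table request as an asynchronous job using a JSON payload (the table name and TTL value are placeholders; the commented-out calls require a live server):

```python
import json

# When data_str is used, request_encoding must be 'json'. The payload
# carries the target endpoint's own input parameters.
payload = json.dumps({
    "table_name": "my_table",
    "action": "ttl",
    "value": "20",       # TTL in minutes
    "options": {},
})

# job = db.create_job(endpoint="/alter/table",
#                     request_encoding="json", data_str=payload)
# The returned job_id can then be polled via get_job().
```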
-
GPUdb.
create_join_table
(join_table_name=None, table_names=None, column_names=None, expressions=[], options={})[source]¶ Creates a table that is the result of a SQL JOIN.
For join details and examples see: Joins. For limitations, see Join Limitations and Cautions.
Parameters
- join_table_name (str) –
- Name of the join table to be created. Has the same naming restrictions as tables.
- table_names (list of str) –
- The list of table names composing the join. Corresponds to a SQL statement FROM clause. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- column_names (list of str) –
- List of member table columns or column expressions to be included in the join. Columns can be prefixed with ‘table_id.column_name’, where ‘table_id’ is the table name or alias. Columns can be aliased via the syntax ‘column_name as alias’. Wild cards ‘*’ can be used to include all columns across member tables or ‘table_id.*’ for all of a single table’s columns. Columns and column expressions composing the join must be uniquely named or aliased–therefore, the ‘*’ wild card cannot be used if column names aren’t unique across all tables. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- expressions (list of str) –
- An optional list of expressions to combine and filter the joined tables. Corresponds to a SQL statement WHERE clause. For details see: expressions. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the join. If the collection provided is non-existent, the collection will be automatically created. If empty, then the join will be at the top level. The default value is ‘’.
max_query_dimensions – Obsolete in GPUdb v7.0
optimize_lookups – Use more memory to speed up the joining of tables. Allowed values are:
- true
- false
The default value is ‘false’.
ttl – Sets the TTL of the join table specified in input parameter join_table_name.
view_id – ID of the view this join table is part of. The default value is ‘’.
no_count – Return a count of 0 for the join table for logging and for show_table; an optimization needed for large overlapped equi-join stencils. The default value is ‘false’.
chunk_size – Maximum number of records per joined-chunk for this table. Defaults to the gpudb.conf file chunk size
Returns
A dict with the following entries–
- join_table_name (str) –
- Value of input parameter join_table_name.
- count (long) –
- The number of records in the join table filtered by the given select expression.
- info (dict of str to str) –
- Additional information.
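A sketch of composing the column list and join expressions for a two-table join (all table, alias, and column names are placeholders; the commented-out call requires a live server):

```python
# Columns are prefixed with their table alias and may be aliased with
# 'as'; expressions play the role of the SQL WHERE clause.
table_names = ["orders as o", "customers as c"]
column_names = ["o.id as order_id", "c.name as customer_name"]
expressions = ["o.customer_id = c.id"]

# response = db.create_join_table(join_table_name="orders_join",
#                                 table_names=table_names,
#                                 column_names=column_names,
#                                 expressions=expressions,
#                                 options={"ttl": "120"})
```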
-
GPUdb.
create_materialized_view
(table_name=None, options={})[source]¶ Initiates the process of creating a materialized view, reserving the view’s name to prevent other views or tables from being created with that name.
For materialized view details and examples, see Materialized Views.
The response contains output parameter view_id, which is used to tag each subsequent operation (projection, union, aggregation, filter, or join) that will compose the view.
Parameters
- table_name (str) –
- Name of the table to be created that is the top-level table of the materialized view.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created table will be a top-level table.
ttl – Sets the TTL of the table specified in input parameter table_name.
persist – If true, then the materialized view specified in input parameter table_name will be persisted and will not expire unless a ttl is specified. If false, then the materialized view will be an in-memory table and will expire unless a ttl is specified. Allowed values are:
- true
- false
The default value is ‘false’.
refresh_method – Method by which the materialized view can be refreshed when the data in the underlying member tables has changed. Allowed values are:
- manual – Refresh only occurs when manually requested by calling alter_table() with an ‘action’ of ‘refresh’
- on_query – For future use.
- on_change – If possible, incrementally refresh (refresh just those records added) whenever an insert, update, delete or refresh of input table is done. A full refresh is done if an incremental refresh is not possible.
- periodic – Refresh table periodically at rate specified by refresh_period
The default value is ‘manual’.
refresh_period – When refresh_method is periodic, specifies the period in seconds at which refresh occurs
refresh_start_time – When refresh_method is periodic, specifies the first time at which a refresh is to be done. Value is a datetime string with format ‘YYYY-MM-DD HH:MM:SS’.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- view_id (str) –
- Value of view_id.
- info (dict of str to str) –
- Additional information.
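For instance, a periodic-refresh view might be configured as follows. This is a minimal sketch: the connection details and view name are assumptions, and the actual call requires a reachable server, so it is shown commented out.

```python
# Options for a materialized view that refreshes every 60 seconds and persists.
# All option values are strings, per the dict-of-str-to-str signature.
options = {
    "refresh_method": "periodic",  # refresh at a fixed rate
    "refresh_period": "60",        # seconds between refreshes
    "persist": "true",             # keep the view unless a ttl expires it
}

# With a live connection (hypothetical host and view name):
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")
# resp = db.create_materialized_view(table_name="sales_summary", options=options)
# view_id = resp["view_id"]  # tag member projections/joins/filters with this ID
```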
-
GPUdb.
create_proc
(proc_name=None, execution_mode='distributed', files={}, command='', args=[], options={})[source]¶ Creates an instance (proc) of the user-defined function (UDF) specified by the given command, options, and files, and makes it available for execution. For details on UDFs, see: User-Defined Functions
Parameters
- proc_name (str) –
- Name of the proc to be created. Must not be the name of a currently existing proc.
- execution_mode (str) –
The execution mode of the proc. Allowed values are:
- distributed – Input table data will be divided into data segments that are distributed across all nodes in the cluster, and the proc command will be invoked once per data segment in parallel. Output table data from each invocation will be saved to the same node as the corresponding input data.
- nondistributed – The proc command will be invoked only once per execution, and will not have access to any input or output table data.
The default value is ‘distributed’.
- files (dict of str to str) –
- A map of the files that make up the proc. The keys of the map are file names, and the values are the binary contents of the files. The file names may include subdirectory names (e.g. ‘subdir/file’) but must not resolve to a directory above the root for the proc. The default value is an empty dict ( {} ).
- command (str) –
- The command (excluding arguments) that will be invoked when the proc is executed. It will be invoked from the directory containing the proc input parameter files and may be any command that can be resolved from that directory. It need not refer to a file actually in that directory; for example, it could be ‘java’ if the proc is a Java application; however, any necessary external programs must be preinstalled on every database node. If the command refers to a file in that directory, it must be preceded with ‘./’ as per Linux convention. If not specified, and exactly one file is provided in input parameter files, that file will be invoked. The default value is ‘’.
- args (list of str) –
- An array of command-line arguments that will be passed to input parameter command when the proc is executed. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- max_concurrency_per_node – The maximum number of concurrent instances of the proc that will be executed per node. 0 allows unlimited concurrency. The default value is ‘0’.
Returns
A dict with the following entries–
- proc_name (str) –
- Value of input parameter proc_name.
- info (dict of str to str) –
- Additional information.
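A sketch of registering a distributed Python UDF follows. The proc name, file name, and UDF source are illustrative, and the server call itself is commented out since it requires a live instance.

```python
# The files map takes file names as keys and binary file contents as values.
udf_source = b"print('processing one data segment')\n"  # illustrative UDF body
files = {"segment_proc.py": udf_source}

# 'python' need not be a file in the proc directory, but any such external
# program must be preinstalled on every database node; args are passed to
# the command on execution.
# resp = db.create_proc(
#     proc_name="segment_proc",
#     execution_mode="distributed",
#     files=files,
#     command="python",
#     args=["segment_proc.py"],
#     options={"max_concurrency_per_node": "0"},  # 0 = unlimited concurrency
# )
```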
-
GPUdb.
create_projection
(table_name=None, projection_name=None, column_names=None, options={})[source]¶ Creates a new projection of an existing table. A projection represents a subset of the columns (potentially including derived columns) of a table.
For projection details and examples, see Projections. For limitations, see Projection Limitations and Cautions.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as
get_records_by_column()
. A projection can be created with a different shard key than the source table. By specifying shard_key, the projection will be sharded according to the specified columns, regardless of how the source table is sharded. The source table can even be unsharded or replicated.
If input parameter table_name is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
Parameters
- table_name (str) –
- Name of the existing table on which the projection is to be applied. An empty table name creates a projection from a single-row virtual table, where columns specified should be constants or constant expressions.
- projection_name (str) –
- Name of the projection to be created. Has the same naming restrictions as tables.
- column_names (list of str) –
- List of columns from input parameter table_name to be included in the projection. Can include derived columns. Can be specified as aliased via the syntax ‘column_name as alias’. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection to which the projection is to be assigned as a child. If the collection provided is non-existent, the collection will be automatically created. If empty, then the projection will be at the top level. The default value is ‘’.
expression – An optional filter expression to be applied to the source table prior to the projection. The default value is ‘’.
is_replicated – If true then the projection will be replicated even if the source table is not. Allowed values are:
- true
- false
The default value is ‘false’.
limit – The number of records to keep. The default value is ‘’.
order_by – Comma-separated list of the columns to be sorted by; e.g. ‘timestamp asc, x desc’. The columns specified must be present in input parameter column_names. If any alias is given for any column name, the alias must be used, rather than the original column name. The default value is ‘’.
materialize_on_gpu – No longer used. See Resource Management Concepts for information about how resources are managed, Tier Strategy Concepts for how resources are targeted for VRAM, and Tier Strategy Usage for how to specify a table’s priority in VRAM. Allowed values are:
- true
- false
The default value is ‘false’.
chunk_size – Indicates the number of records per chunk to be used for this projection.
create_indexes – Comma-separated list of columns on which to create indexes on the projection. The columns specified must be present in input parameter column_names. If any alias is given for any column name, the alias must be used, rather than the original column name.
ttl – Sets the TTL of the projection specified in input parameter projection_name.
shard_key – Comma-separated list of the columns to be sharded on; e.g. ‘column1, column2’. The columns specified must be present in input parameter column_names. If any alias is given for any column name, the alias must be used, rather than the original column name. The default value is ‘’.
persist – If true, then the projection specified in input parameter projection_name will be persisted and will not expire unless a ttl is specified. If false, then the projection will be an in-memory table and will expire unless a ttl is otherwise specified. Allowed values are:
- true
- false
The default value is ‘false’.
preserve_dict_encoding – If true, then columns that were dict encoded in the source table will be dict encoded in the projection. Allowed values are:
- true
- false
The default value is ‘true’.
retain_partitions – Determines whether the created projection will retain the partitioning scheme from the source table. Allowed values are:
- true
- false
The default value is ‘false’.
view_id – ID of view of which this projection is a member. The default value is ‘’.
Returns
A dict with the following entries–
- projection_name (str) –
- Value of input parameter projection_name.
- info (dict of str to str) –
Additional information. The default value is an empty dict ( {} ). Allowed keys are:
- count – Number of records in the final table
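As an illustration, the following sketch builds a filtered, resharded projection with a derived column. Table and column names are hypothetical, and the server call is commented out.

```python
# Columns may be plain, derived, or aliased via the 'expr as alias' syntax.
column_names = ["id", "ts", "price * qty as total"]

options = {
    "expression": "qty > 0",  # filter applied to the source before projecting
    "order_by": "ts asc",     # must reference names from column_names
    "shard_key": "id",        # reshard independently of the source table
    "persist": "true",
}
# resp = db.create_projection("orders", "order_totals", column_names, options)
# count = resp["info"].get("count")  # number of records in the final table
```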
-
GPUdb.
create_resource_group
(name=None, tier_attributes={}, ranking=None, adjoining_resource_group='', options={})[source]¶ Creates a new resource group to facilitate resource management.
Parameters
- name (str) –
- Name of the group to be created. Must contain only letters, digits, and underscores, and cannot begin with a digit. Must not match existing resource group name.
- tier_attributes (dict of str to dicts of str to str) –
Optional map containing tier names and their respective attribute group limits. The only valid attribute limit that can be set is max_memory (in bytes) for the VRAM & RAM tiers.
For instance, to set max VRAM capacity to 1GB and max RAM capacity to 10GB, use: {‘VRAM’:{‘max_memory’:‘1000000000’}, ‘RAM’:{‘max_memory’:‘10000000000’}}. The default value is an empty dict ( {} ). Allowed keys are:
- max_memory – Maximum amount of memory usable in the given tier at one time for this group.
- ranking (str) –
Indicates the relative ranking among existing resource groups where this new resource group will be placed. When using before or after, specify which resource group this one will be inserted before or after in input parameter adjoining_resource_group. Allowed values are:
- first
- last
- before
- after
- adjoining_resource_group (str) –
- If input parameter ranking is before or after, this field indicates the resource group before or after which the current group will be placed; otherwise, leave blank. The default value is ‘’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- max_cpu_concurrency – Maximum number of simultaneous threads that will be used to execute a request for this group.
- max_scheduling_priority – Maximum priority of a scheduled task for this group.
- max_tier_priority – Maximum priority of a tiered object for this group.
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- info (dict of str to str) –
- Additional information.
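The tier_attributes example above can be written out as follows; the group name, adjoining group, and concurrency limit are assumptions, and the call is commented out.

```python
# Tier limits follow the documented shape: tier name -> {'max_memory': bytes}.
tier_attributes = {
    "VRAM": {"max_memory": "1000000000"},   # 1 GB VRAM cap
    "RAM":  {"max_memory": "10000000000"},  # 10 GB RAM cap
}

# 'before'/'after' rankings require naming the adjoining group:
# resp = db.create_resource_group(
#     name="analyst_group",
#     tier_attributes=tier_attributes,
#     ranking="before",
#     adjoining_resource_group="default",   # hypothetical existing group
#     options={"max_cpu_concurrency": "4"},
# )
```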
-
GPUdb.
create_role
(name=None, options={})[source]¶ Creates a new role.
Parameters
- name (str) –
- Name of the role to be created. Must contain only lowercase letters, digits, and underscores, and cannot begin with a digit. Must not be the same name as an existing user or role.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- resource_group – Name of an existing resource group to associate with this user
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- info (dict of str to str) –
- Additional information.
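The naming constraint can be checked client-side before making the call; this validator is a sketch of the documented rule (lowercase letters, digits, underscores, no leading digit), and the role and resource group names in the commented call are hypothetical.

```python
import re

def is_valid_role_name(name):
    # Documented constraint: only lowercase letters, digits, and underscores,
    # and the name cannot begin with a digit.
    return bool(re.fullmatch(r"[a-z_][a-z0-9_]*", name))

# is_valid_role_name("etl_readers")  -> True
# is_valid_role_name("1st_role")     -> False (begins with a digit)
# db.create_role(name="etl_readers", options={"resource_group": "analyst_group"})
```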
-
GPUdb.
create_table
(table_name=None, type_id=None, options={})[source]¶ Creates a new table or collection. If a new table is being created, the type of the table is given by input parameter type_id, which must be the ID of a currently registered type (i.e. one created via
create_type()
). The table will be created inside a collection if the option collection_name is specified. If that collection does not already exist, it will be created. To create a new collection, specify the name of the collection in input parameter table_name and set the is_collection option to true; input parameter type_id will be ignored.
A table may optionally be designated to use a replicated distribution scheme, have foreign keys to other tables assigned, be assigned a partitioning scheme, or have a tier strategy assigned.
Parameters
- table_name (str) –
- Name of the table to be created. An error for requests with an existing table of the same name and type ID may be suppressed by using the no_error_if_exists option. See Tables for naming restrictions.
- type_id (str) –
- ID of a currently registered type. All objects added to the newly created table will be of this type. Ignored if is_collection is true.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
no_error_if_exists – If true, prevents an error from occurring if the table already exists and is of the given type. If a table with the same ID but a different type exists, it is still an error. Allowed values are:
- true
- false
The default value is ‘false’.
collection_name – Name of a collection which is to contain the newly created table. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created table will be a top-level table.
is_collection – Indicates whether the new table to be created will be a collection. Allowed values are:
- true
- false
The default value is ‘false’.
disallow_homogeneous_tables – No longer supported; value will be ignored. Allowed values are:
- true
- false
The default value is ‘false’.
is_replicated – For a table, affects the distribution scheme for the table’s data. If true and the given type has no explicit shard key defined, the table will be replicated. If false, the table will be sharded according to the shard key specified in the given input parameter type_id, or randomly sharded, if no shard key is specified. Note that a type containing a shard key cannot be used to create a replicated table. Allowed values are:
- true
- false
The default value is ‘false’.
foreign_keys – Semicolon-separated list of foreign keys, of the format ‘(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]’.
foreign_shard_key – Foreign shard key of the format ‘source_column references shard_by_column from target_table(primary_key_column)’.
partition_type – Partitioning scheme to use. Allowed values are:
- RANGE – Use range partitioning.
- INTERVAL – Use interval partitioning.
- LIST – Use list partitioning.
- HASH – Use hash partitioning.
partition_keys – Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by partition_definitions.
partition_definitions – Comma-separated list of partition definitions, whose format depends on the choice of partition_type. See range partitioning, interval partitioning, list partitioning, or hash partitioning for example formats.
is_automatic_partition – If true, a new partition will be created for values which don’t fall into an existing partition. Currently only supported for list partitions. Allowed values are:
- true
- false
The default value is ‘false’.
ttl – For a table, sets the TTL of the table specified in input parameter table_name.
chunk_size – Indicates the number of records per chunk to be used for this table.
is_result_table – For a table, indicates whether the table is an in-memory table. A result table cannot contain store_only, text_search, or string columns (charN columns are acceptable), and it will not be retained if the server is restarted. Allowed values are:
- true
- false
The default value is ‘false’.
strategy_definition – The tier strategy for the table and its columns. See tier strategy usage for format and tier strategy examples for examples.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- type_id (str) –
- Value of input parameter type_id.
- is_collection (bool) –
- Indicates if the created entity is a collection.
- info (dict of str to str) –
- Additional information.
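A sketch of a typical options dict follows; the collection, table, and column names are illustrative, and the foreign_keys value follows the format documented above. The call itself requires a type_id from a prior create_type() and is commented out.

```python
# Options placing the table in a collection with a foreign key and a TTL.
options = {
    "collection_name": "analytics",   # auto-created if it does not exist
    "no_error_if_exists": "true",     # tolerate an existing table of same type
    "foreign_keys": "(dept_id) references departments(id) as fk_dept",
    "ttl": "120",
}

# type_id comes from a prior create_type() call:
# resp = db.create_table(table_name="employees", type_id=type_id, options=options)
```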
-
GPUdb.
create_table_monitor
(table_name=None, options={})[source]¶ Creates a monitor that watches for a single table modification event type (insert, update, or delete) on a particular table (identified by input parameter table_name) and forwards event notifications to subscribers via ZMQ. After this call completes, subscribe to the returned output parameter topic_id on the ZMQ table monitor port (default 9002). Each time an operation of the given type on the table completes, a multipart message is published for that topic; the first part contains only the topic ID, and each subsequent part contains one binary-encoded Avro object that corresponds to the event and can be decoded using output parameter type_schema. The monitor will continue to run (regardless of whether or not there are any subscribers) until deactivated with
clear_table_monitor()
. For more information on table monitors, see Table Monitors.
Parameters
- table_name (str) –
- Name of the table to monitor. Must not refer to a collection.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
event – Type of modification event on the target table to be monitored by this table monitor. Allowed values are:
- insert – Get notifications of new record insertions. The new row images are forwarded to the subscribers.
- update – Get notifications of update operations. The modified row count information is forwarded to the subscribers.
- delete – Get notifications of delete operations. The deleted row count information is forwarded to the subscribers.
The default value is ‘insert’.
Returns
A dict with the following entries–
- topic_id (str) –
- The ZMQ topic ID to subscribe to for inserted records.
- table_name (str) –
- Value of input parameter table_name.
- type_schema (str) –
- JSON Avro schema of the table, for use in decoding published records.
- info (dict of str to str) –
- Additional information.
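A sketch of monitoring deletes instead of the default inserts follows. The table name is hypothetical, and the ZMQ subscription snippet assumes pyzmq is available; both are commented out since they need a live server.

```python
# Monitor delete operations instead of the default insert events.
options = {"event": "delete"}

# resp = db.create_table_monitor("orders", options=options)
# topic_id, type_schema = resp["topic_id"], resp["type_schema"]
# Subscribers then connect to the ZMQ table monitor port (default 9002)
# and subscribe to topic_id; later message parts decode with type_schema.
# import zmq
# sock = zmq.Context().socket(zmq.SUB)
# sock.connect("tcp://127.0.0.1:9002")
# sock.setsockopt_string(zmq.SUBSCRIBE, topic_id)
```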
-
GPUdb.
create_trigger_by_area
(request_id=None, table_names=None, x_column_name=None, x_vector=None, y_column_name=None, y_vector=None, options={})[source]¶ Sets up an area trigger mechanism for two column_names for one or more tables. (This function is essentially the two-dimensional version of create_trigger_by_range().) Once the trigger has been activated, any record added to the listed table(s) via insert_records() with the chosen columns’ values falling within the specified region will trip the trigger. All such records will be queued at the trigger port (by default ‘9001’, but able to be retrieved via show_system_status()) for any listening client to collect. Active triggers can be cancelled by using the clear_trigger() endpoint or by clearing all relevant tables.
The output returns the trigger handle and indicates the success or failure of the trigger activation.
Parameters
- request_id (str) –
- User-created ID for the trigger. The ID can be alphanumeric, contain symbols, and must contain at least one character.
- table_names (list of str) –
- Names of the tables on which the trigger will be activated and maintained. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- x_column_name (str) –
- Name of a numeric column on which the trigger is activated. Usually ‘x’ for geospatial data points.
- x_vector (list of floats) –
- The respective coordinate values for the region on which the trigger is activated. This usually translates to the x-coordinates of a geospatial region. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- y_column_name (str) –
- Name of a second numeric column on which the trigger is activated. Usually ‘y’ for geospatial data points.
- y_vector (list of floats) –
- The respective coordinate values for the region on which the trigger is activated. This usually translates to the y-coordinates of a geospatial region. Must be the same length as input parameter x_vector. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- trigger_id (str) –
- Value of input parameter request_id.
- info (dict of str to str) –
- Additional information.
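As a sketch, the x and y vectors together describe the corners of the trigger region; the coordinates and table name below are illustrative, and the call is commented out.

```python
# A rectangular region (corner coordinates are illustrative, not from this doc).
x_vector = [-77.2, -76.8, -76.8, -77.2]  # x (longitude) values of the region
y_vector = [38.7, 38.7, 39.1, 39.1]      # y (latitude) values; same length

# resp = db.create_trigger_by_area(
#     request_id="dc_area_trigger",
#     table_names=["vehicle_positions"],
#     x_column_name="x", x_vector=x_vector,
#     y_column_name="y", y_vector=y_vector,
# )
# trigger_id = resp["trigger_id"]  # echoes request_id; pass to clear_trigger()
```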
-
GPUdb.
create_trigger_by_range
(request_id=None, table_names=None, column_name=None, min=None, max=None, options={})[source]¶ Sets up a simple range trigger for a column_name for one or more tables. Once the trigger has been activated, any record added to the listed table(s) via insert_records() with the chosen column_name’s value falling within the specified range will trip the trigger. All such records will be queued at the trigger port (by default ‘9001’, but able to be retrieved via show_system_status()) for any listening client to collect. Active triggers can be cancelled by using the clear_trigger() endpoint or by clearing all relevant tables.
The output returns the trigger handle and indicates the success or failure of the trigger activation.
Parameters
- request_id (str) –
- User-created ID for the trigger. The ID can be alphanumeric, contain symbols, and must contain at least one character.
- table_names (list of str) –
- Tables on which the trigger will be active. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- column_name (str) –
- Name of a numeric column_name on which the trigger is activated.
- min (float) –
- The lower bound (inclusive) for the trigger range.
- max (float) –
- The upper bound (inclusive) for the trigger range.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- trigger_id (str) –
- Value of input parameter request_id.
- info (dict of str to str) –
- Additional information.
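A minimal sketch of a range trigger request follows; the table, column, and bounds are illustrative, and the call is commented out.

```python
# Trip on temperature readings between 90 and 120, inclusive on both ends.
request = {
    "request_id": "overheat_alert",
    "table_names": ["sensor_readings"],  # hypothetical table
    "column_name": "temperature",
    "min": 90.0,    # inclusive lower bound
    "max": 120.0,   # inclusive upper bound
}

# resp = db.create_trigger_by_range(**request)
# Matching records queue at the trigger port (default 9001) for clients.
```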
-
GPUdb.
create_type
(type_definition=None, label=None, properties={}, options={})[source]¶ Creates a new type describing the layout or schema of a table. The type definition is a JSON string describing the fields (i.e. columns) of the type. Each field consists of a name and a data type. Supported data types are: double, float, int, long, string, and bytes. In addition one or more properties can be specified for each column which customize the memory usage and query availability of that column. Note that some properties are mutually exclusive–i.e. they cannot be specified for any given column simultaneously. One example of mutually exclusive properties are data and store_only.
A single primary key and/or single shard key can be set across one or more columns. If a primary key is specified, then a uniqueness constraint is enforced, in that only a single object can exist with a given primary key. When
inserting
data into a table with a primary key, depending on the parameters in the request, incoming objects with primary key values that match existing objects will either overwrite (i.e. update) the existing object or will be skipped and not added into the set. Example of a type definition with some of the parameters:
{"type":"record", "name":"point", "fields":[{"name":"msg_id","type":"string"}, {"name":"x","type":"double"}, {"name":"y","type":"double"}, {"name":"TIMESTAMP","type":"double"}, {"name":"source","type":"string"}, {"name":"group_id","type":"string"}, {"name":"OBJECT_ID","type":"string"}] }
Properties:
{"group_id":["store_only"], "msg_id":["store_only","text_search"] }
Parameters
- type_definition (str) –
- a JSON string describing the columns of the type to be registered.
- label (str) –
- A user-defined description string which can be used to differentiate between tables and types with otherwise identical schemas.
- properties (dict of str to lists of str) –
Each key-value pair specifies the properties to use for a given column where the key is the column name. All keys used must be relevant column names for the given table. Specifying any property overrides the default properties for that column (which is based on the column’s data type). Allowed values are:
- data – Default property for all numeric and string type columns; makes the column available for GPU queries.
- text_search – Valid only for ‘string’ columns. Enables full text search for string columns. Can be set independently of data and store_only.
- store_only – Persist the column value but do not make it available to queries (e.g. filter()); i.e., it is mutually exclusive with the data property. Any ‘bytes’ type column must have a store_only property. This property reduces system memory usage.
- disk_optimized – Works in conjunction with the data property for string columns. This property reduces system disk usage by disabling reverse string lookups. Queries like filter(), filter_by_list(), and filter_by_value() work as usual, but aggregate_unique() and aggregate_group_by() are not allowed on columns with this property.
- timestamp – Valid only for ‘long’ columns. Indicates that this field represents a timestamp and will be provided in milliseconds since the Unix epoch: 00:00:00 Jan 1 1970. Dates represented by a timestamp must fall between the year 1000 and the year 2900.
- ulong – Valid only for ‘string’ columns. It represents an unsigned long integer data type. The string can only be interpreted as an unsigned long data type with minimum value of zero, and maximum value of 18446744073709551615.
- decimal – Valid only for ‘string’ columns. It represents a SQL type NUMERIC(19, 4) data type. There can be up to 15 digits before the decimal point and up to four digits in the fractional part. The value can be positive or negative (indicated by a minus sign at the beginning). This property is mutually exclusive with the text_search property.
- date – Valid only for ‘string’ columns. Indicates that this field represents a date and will be provided in the format ‘YYYY-MM-DD’. The allowable range is 1000-01-01 through 2900-01-01. This property is mutually exclusive with the text_search property.
- time – Valid only for ‘string’ columns. Indicates that this field represents a time-of-day and will be provided in the format ‘HH:MM:SS.mmm’. The allowable range is 00:00:00.000 through 23:59:59.999. This property is mutually exclusive with the text_search property.
- datetime – Valid only for ‘string’ columns. Indicates that this field represents a datetime and will be provided in the format ‘YYYY-MM-DD HH:MM:SS.mmm’. The allowable range is 1000-01-01 00:00:00.000 through 2900-01-01 23:59:59.999. This property is mutually exclusive with the text_search property.
- char1 – This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 1 character.
- char2 – This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 2 characters.
- char4 – This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 4 characters.
- char8 – This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 8 characters.
- char16 – This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 16 characters.
- char32 – This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 32 characters.
- char64 – This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 64 characters.
- char128 – This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 128 characters.
- char256 – This property provides optimized memory, disk and query performance for string columns. Strings with this property must be no longer than 256 characters.
- int8 – This property provides optimized memory and query performance for int columns. Ints with this property must be between -128 and +127 (inclusive)
- int16 – This property provides optimized memory and query performance for int columns. Ints with this property must be between -32768 and +32767 (inclusive)
- ipv4 – This property provides optimized memory, disk and query performance for string columns representing IPv4 addresses (i.e. 192.168.1.1). Strings with this property must be of the form: A.B.C.D where A, B, C and D are in the range of 0-255.
- wkt – Valid only for ‘string’ and ‘bytes’ columns. Indicates that this field contains geospatial geometry objects in Well-Known Text (WKT) or Well-Known Binary (WKB) format.
- primary_key – This property indicates that this column will be part of (or the entire) primary key.
- shard_key – This property indicates that this column will be part of (or the entire) shard key.
- nullable – This property indicates that this column is nullable. However, setting this property is insufficient for making the column nullable. The user must declare the type of the column as a union between its regular type and ‘null’ in the avro schema for the record type in input parameter type_definition. For example, if a column is of type integer and is nullable, then the entry for the column in the avro schema must be: [‘int’, ‘null’]. The C++, C#, Java, and Python APIs have built-in convenience for bypassing setting the avro schema by hand. For those languages, one can use this property as usual and not have to worry about the avro schema for the record.
- dict – This property indicates that this column should be dictionary encoded. It can only be used in conjunction with restricted string (charN), int, long or date columns. Dictionary encoding is best for columns where the cardinality (the number of unique values) is expected to be low. This property can save a large amount of memory.
- init_with_now – For ‘date’, ‘time’, ‘datetime’, or ‘timestamp’ column types, replace empty strings and invalid timestamps with ‘NOW()’ upon insert.
The default value is an empty dict ( {} ).
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- type_id (str) –
- An identifier representing the created type. This type_id can
be used in subsequent calls to
create a table
- type_definition (str) –
- Value of input parameter type_definition.
- label (str) –
- Value of input parameter label.
- properties (dict of str to lists of str) –
- Value of input parameter properties.
- info (dict of str to str) –
- Additional information.
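The type-definition and properties examples above can be built programmatically. This sketch adds a hypothetical nullable column to illustrate the union-with-‘null’ requirement; the label and column choices are assumptions, and the call is commented out.

```python
import json

# Avro-style type definition. A nullable column declares a union with 'null'
# (the Python API can also handle this when the 'nullable' property is set).
type_definition = json.dumps({
    "type": "record",
    "name": "point",
    "fields": [
        {"name": "msg_id", "type": "string"},
        {"name": "x", "type": "double"},
        {"name": "y", "type": "double"},
        {"name": "tag", "type": ["string", "null"]},  # nullable column
    ],
})

properties = {
    "msg_id": ["char32", "primary_key"],  # fixed-width, unique key column
    "tag": ["store_only", "nullable"],    # persisted but not queryable
}
# resp = db.create_type(type_definition, label="point_type", properties=properties)
# type_id = resp["type_id"]  # pass to create_table()
```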
-
GPUdb.
create_union
(table_name=None, table_names=None, input_column_names=None, output_column_names=None, options={})[source]¶ Merges data from one or more tables with comparable data types into a new table.
The following merges are supported:
UNION (DISTINCT/ALL) - For data set union details and examples, see Union. For limitations, see Union Limitations and Cautions.
INTERSECT (DISTINCT/ALL) - For data set intersection details and examples, see Intersect. For limitations, see Intersect Limitations.
EXCEPT (DISTINCT/ALL) - For data set subtraction details and examples, see Except. For limitations, see Except Limitations.
MERGE VIEWS - For a given set of filtered views on a single table, creates a single filtered view containing all of the unique records across all of the given filtered data sets.
Non-charN ‘string’ and ‘bytes’ column types cannot be merged, nor can columns marked as store-only.
Parameters
- table_name (str) –
- Name of the table to be created. Has the same naming restrictions as tables.
- table_names (list of str) –
- The list of table names to merge. Must contain the names of one or more existing tables. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- input_column_names (list of lists of str) –
- The list of columns from each of the corresponding input tables. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- output_column_names (list of str) –
- The list of names of the columns to be stored in the output table. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the output table. If the collection provided is non-existent, the collection will be automatically created. If empty, the output table will be a top-level table. The default value is ‘’.
materialize_on_gpu – No longer used. See Resource Management Concepts for information about how resources are managed, Tier Strategy Concepts for how resources are targeted for VRAM, and Tier Strategy Usage for how to specify a table’s priority in VRAM. Allowed values are:
- true
- false
The default value is ‘false’.
mode – If merge_views, then this operation will merge the provided views. All input parameter table_names must be views from the same underlying base table. Allowed values are:
- union_all – Retains all rows from the specified tables.
- union – Retains all unique rows from the specified tables (synonym for union_distinct).
- union_distinct – Retains all unique rows from the specified tables.
- except – Retains all unique rows from the first table that do not appear in the second table (only works on 2 tables).
- except_all – Retains all rows (including duplicates) from the first table that do not appear in the second table (only works on 2 tables).
- intersect – Retains all unique rows that appear in both of the specified tables (only works on 2 tables).
- intersect_all – Retains all rows (including duplicates) that appear in both of the specified tables (only works on 2 tables).
- merge_views – Merge two or more views (or views of views) of the same base data set into a new view. If this mode is selected input parameter input_column_names AND input parameter output_column_names must be empty. The resulting view would match the results of a SQL OR operation, e.g., if filter 1 creates a view using the expression ‘x = 20’ and filter 2 creates a view using the expression ‘x <= 10’, then the merge views operation creates a new view using the expression ‘x = 20 OR x <= 10’.
The default value is ‘union_all’.
chunk_size – Indicates the number of records per chunk to be used for this output table.
create_indexes – Comma-separated list of columns on which to create indexes on the output table. The columns specified must be present in input parameter output_column_names.
ttl – Sets the TTL of the output table specified in input parameter table_name.
persist – If true, then the output table specified in input parameter table_name will be persisted and will not expire unless a ttl is specified. If false, then the output table will be an in-memory table and will expire unless a ttl is specified. Allowed values are:
- true
- false
The default value is ‘false’.
view_id – ID of view of which this output table is a member. The default value is ‘’.
force_replicated – If true, then the output table specified in input parameter table_name will be replicated even if the source tables are not. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- info (dict of str to str) –
Additional information. The default value is an empty dict ( {} ). Allowed keys are:
- count – Number of records in the final table
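As a usage sketch, the call below merges two tables with UNION ALL. The table and column names are hypothetical, and a reachable GPUdb instance is assumed; the server call itself is shown commented.

```python
# Hypothetical table/column names; a live GPUdb server is assumed.
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")

union_args = dict(
    table_name="events_all",                      # output table to create
    table_names=["events_2023", "events_2024"],   # tables to merge
    input_column_names=[["id", "ts"], ["id", "ts"]],
    output_column_names=["id", "ts"],
    options={"mode": "union_all", "persist": "true"},
)
# response = db.create_union(**union_args)
# response["table_name"] == "events_all" on success

# Sanity checks on the request shape: one column list per input table,
# each the same length as the output column list.
assert len(union_args["input_column_names"]) == len(union_args["table_names"])
assert all(len(cols) == len(union_args["output_column_names"])
           for cols in union_args["input_column_names"])
```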
-
GPUdb.
create_user_external
(name=None, options={})[source]¶ Creates a new external user (a user whose credentials are managed by an external LDAP).
Parameters
- name (str) –
- Name of the user to be created. Must exactly match the user’s name in the external LDAP, prefixed with an ‘@’. Must not be the same name as an existing user.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
create_user_internal
(name=None, password=None, options={})[source]¶ Creates a new internal user (a user whose credentials are managed by the database system).
Parameters
- name (str) –
- Name of the user to be created. Must contain only lowercase letters, digits, and underscores, and cannot begin with a digit. Must not be the same name as an existing user or role.
- password (str) –
- Initial password of the user to be created. May be an empty string for no password.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- resource_group – Name of an existing resource group to associate with this user
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- info (dict of str to str) –
- Additional information. The default value is an empty dict ( {} ).
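A minimal sketch of creating an internal user tied to a resource group. The user name, password, and group are hypothetical; the server call requires a live GPUdb instance and is shown commented.

```python
import re

# Hypothetical name and resource group; requires a live GPUdb server.
# db = gpudb.GPUdb(host="127.0.0.1", port="9191", username="admin", password="...")

name = "etl_service"                       # lowercase letters, digits, underscores;
                                           # must not begin with a digit
options = {"resource_group": "batch_rg"}   # optional: associate a resource group
# response = db.create_user_internal(name=name, password="initial-secret",
#                                    options=options)
# response["name"] == name on success

# Client-side check mirroring the documented naming rule.
assert re.fullmatch(r"[a-z_][a-z0-9_]*", name)
```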
-
GPUdb.
delete_graph
(graph_name=None, options={})[source]¶ Deletes an existing graph from the graph server and/or persist.
Parameters
- graph_name (str) –
- Name of the graph to be deleted.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
delete_persist – If set to true, the graph is removed from the server and persist. If set to false, the graph is removed from the server but is left in persist. The graph can be reloaded from persist if it is recreated with the same ‘graph_name’. Allowed values are:
- true
- false
The default value is ‘true’.
Returns
A dict with the following entries–
- result (bool) –
- Indicates a successful deletion.
- info (dict of str to str) –
- Additional information.
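As a sketch, the options below remove a graph from the graph server while leaving it in persist so it can be reloaded later under the same name. The graph name is hypothetical and the call, which needs a live server, is commented.

```python
# Hypothetical graph name; requires a live GPUdb server with the graph server enabled.
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")

options = {"delete_persist": "false"}   # drop from the server but keep in persist,
                                        # so the graph can be recreated later
# response = db.delete_graph(graph_name="road_network", options=options)
# response["result"] is True on a successful deletion
assert options["delete_persist"] in ("true", "false")
```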
-
GPUdb.
delete_proc
(proc_name=None, options={})[source]¶ Deletes a proc. Any currently running instances of the proc will be killed.
Parameters
- proc_name (str) –
- Name of the proc to be deleted. Must be the name of a currently existing proc.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- proc_name (str) –
- Value of input parameter proc_name.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
delete_records
(table_name=None, expressions=None, options={})[source]¶ Deletes record(s) matching the provided criteria from the given table. The record selection criteria can either be one or more input parameter expressions (matching multiple records), a single record identified by the record_id option, or all records when using the delete_all_records option. Note that the three selection criteria are mutually exclusive. This operation cannot be run on a collection or a view. The operation is synchronous, meaning that a response will not be available until the request is completely processed and all the matching records are deleted.
Parameters
- table_name (str) –
- Name of the table from which to delete records. The set must be a currently existing table and not a collection or a view.
- expressions (list of str) –
- A list of the actual predicates, one for each select; format should follow the guidelines provided here. Specifying one or more input parameter expressions is mutually exclusive to specifying record_id in the input parameter options. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
global_expression – An optional global expression to reduce the search space of the input parameter expressions. The default value is ‘’.
record_id – A record ID identifying a single record, obtained at the time of insertion of the record or by calling get_records_from_collection() with the return_record_ids option. This option cannot be used to delete records from replicated tables.
delete_all_records – If set to true, all records in the table will be deleted. If set to false, then the option is effectively ignored. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- count_deleted (long) –
- Total number of records deleted across all expressions.
- counts_deleted (list of longs) –
- Total number of records deleted per expression.
- info (dict of str to str) –
- Additional information.
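The three mutually exclusive selection modes can be sketched as below. Table names and predicates are hypothetical, and the server calls are commented since they need a live GPUdb instance.

```python
# Hypothetical table/column names; requires a live GPUdb server.
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")

# Expression-based deletion: each predicate is evaluated independently,
# and counts_deleted reports the deletions per expression.
expressions = ["status = 'expired'", "ts < 1609459200"]
options = {"global_expression": "region = 'us-east'"}  # narrows the search space
# response = db.delete_records("sessions", expressions, options)
# response["count_deleted"] == sum(response["counts_deleted"])

# Alternatively, delete everything; this is mutually exclusive with
# specifying expressions or a record_id.
wipe_options = {"delete_all_records": "true"}
# db.delete_records("sessions", [], wipe_options)

assert not (expressions and "record_id" in options)  # criteria are exclusive
```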
-
GPUdb.
delete_resource_group
(name=None, options={})[source]¶ Deletes a resource group.
Parameters
- name (str) –
- Name of the resource group to be deleted.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
delete_role
(name=None, options={})[source]¶ Deletes an existing role.
Parameters
- name (str) –
- Name of the role to be deleted. Must be an existing role.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
delete_user
(name=None, options={})[source]¶ Deletes an existing user.
Parameters
- name (str) –
- Name of the user to be deleted. Must be an existing user.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- info (dict of str to str) –
- Additional information.
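delete_resource_group, delete_role, and delete_user share the same shape: one required name, usually empty options, and a response that echoes the name. A sketch with hypothetical names; the real calls require a live server and admin credentials.

```python
# Hypothetical names; requires a live GPUdb server and admin credentials.
# db = gpudb.GPUdb(host="127.0.0.1", port="9191", username="admin", password="...")

# for role in ("analysts", "readers"):
#     response = db.delete_role(name=role, options={})
#     assert response["name"] == role
# db.delete_user(name="jsmith", options={})

# Simulated response shape, for illustration only:
response = {"name": "jsmith", "info": {}}
assert response["name"] == "jsmith"
```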
-
GPUdb.
execute_proc
(proc_name=None, params={}, bin_params={}, input_table_names=[], input_column_names={}, output_table_names=[], options={})[source]¶ Executes a proc. This endpoint is asynchronous and does not wait for the proc to complete before returning.
Parameters
- proc_name (str) –
- Name of the proc to execute. Must be the name of a currently existing proc.
- params (dict of str to str) –
- A map containing named parameters to pass to the proc. Each key/value pair specifies the name of a parameter and its value. The default value is an empty dict ( {} ).
- bin_params (dict of str to str) –
- A map containing named binary parameters to pass to the proc. Each key/value pair specifies the name of a parameter and its value. The default value is an empty dict ( {} ).
- input_table_names (list of str) –
- Names of the tables containing data to be passed to the proc. Each name specified must be the name of a currently existing table. If no table names are specified, no data will be passed to the proc. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- input_column_names (dict of str to lists of str) –
- Map of table names from input parameter input_table_names to lists of names of columns from those tables that will be passed to the proc. Each column name specified must be the name of an existing column in the corresponding table. If a table name from input parameter input_table_names is not included, all columns from that table will be passed to the proc. The default value is an empty dict ( {} ).
- output_table_names (list of str) –
- Names of the tables to which output data from the proc will be written. If a specified table does not exist, it will automatically be created with the same schema as the corresponding table (by order) from input parameter input_table_names, excluding any primary and shard keys. If a specified table is a non-persistent result table, it must not have primary or shard keys. If no table names are specified, no output data can be returned from the proc. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- cache_input – A comma-delimited list of table names from input parameter input_table_names from which input data will be cached for use in subsequent calls to execute_proc() with the use_cached_input option. Cached input data will be retained until the proc status is cleared with the clear_complete option of show_proc_status() and all proc instances using the cached data have completed. The default value is ‘’.
- use_cached_input – A comma-delimited list of run IDs (as returned from prior calls to execute_proc()) of running or completed proc instances from which input data cached using the cache_input option will be used. Cached input data will not be used for any tables specified in input parameter input_table_names, but data from all other tables cached for the specified run IDs will be passed to the proc. If the same table was cached for multiple specified run IDs, the cached data from the first run ID specified in the list that includes that table will be used. The default value is ‘’.
- kifs_input_dirs – A comma-delimited list of KiFS directories whose local files will be made directly accessible to the proc through the API. (All KiFS files, local or not, are also accessible through the file system below the KiFS mount point.) Each name specified must be the name of an existing KiFS directory. The default value is ‘’.
- run_tag – A string that, if not empty, can be used in subsequent calls to show_proc_status() or kill_proc() to identify the proc instance. The default value is ‘’.
Returns
A dict with the following entries–
- run_id (str) –
- The run ID of the running proc instance. This may be passed to show_proc_status() to obtain status information, or kill_proc() to kill the proc instance.
- info (dict of str to str) –
- Additional information.
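A sketch of running a proc twice, caching its input on the first run and reusing the cache on the second. The proc, table, and tag names are hypothetical; the server calls are commented because they need a live instance with a registered proc.

```python
# Hypothetical proc and table names; requires a live GPUdb server.
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")

first_run = dict(
    proc_name="score_model",
    params={"threshold": "0.5"},       # named string parameters for the proc
    input_table_names=["features"],
    output_table_names=["scores"],
    options={"cache_input": "features",   # cache this table's data server-side
             "run_tag": "nightly"},       # tag for show_proc_status()/kill_proc()
)
# run_id = db.execute_proc(**first_run)["run_id"]

# A second run can reuse the cached input instead of re-reading the table;
# the run ID placeholder below would come from the first call.
second_run_options = {"use_cached_input": "<run_id from the first call>"}

assert first_run["options"]["cache_input"] == "features"
```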
-
GPUdb.
execute_sql
(statement=None, offset=0, limit=-9999, encoding='binary', request_schema_str='', data=[], options={})[source]¶ SQL Request
Parameters
- statement (str) –
- SQL statement (query, DML, or DDL) to be executed
- offset (long) –
- A non-negative integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use output parameter has_more_records to see if more records exist in the result to be fetched, and input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records; either ‘binary’ or ‘json’. Allowed values are:
- binary
- json
The default value is ‘binary’.
- request_schema_str (str) –
- Avro schema of input parameter data. The default value is ‘’.
- data (list of str) –
- An array of binary-encoded data for the records to be bound to the SQL query. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
parallel_execution – If false, disables the parallel step execution of the given query. Allowed values are:
- true
- false
The default value is ‘true’.
cost_based_optimization – If false, disables the cost-based optimization of the given query. Allowed values are:
- true
- false
The default value is ‘false’.
plan_cache – If false, disables plan caching for the given query. Allowed values are:
- true
- false
The default value is ‘true’.
rule_based_optimization – If false, disables rule-based rewrite optimizations for the given query. Allowed values are:
- true
- false
The default value is ‘true’.
results_caching – If false, disables caching of the results of the given query. Allowed values are:
- true
- false
The default value is ‘true’.
paging_table – If empty, or if the specified paging table does not exist, the system will create a paging table and return it when the query output has more records than the user requested. If the paging table exists in the system, the records from the paging table are returned without evaluating the query.
paging_table_ttl – Sets the TTL of the paging table.
distributed_joins – If true, enables the use of distributed joins in servicing the given query. Any query requiring a distributed join will succeed, though hints can be used in the query to change the distribution of the source data to allow the query to succeed. Allowed values are:
- true
- false
The default value is ‘false’.
distributed_operations – If true, enables the use of distributed operations in servicing the given query. Any query requiring a distributed join will succeed, though hints can be used in the query to change the distribution of the source data to allow the query to succeed. Allowed values are:
- true
- false
The default value is ‘false’.
ssq_optimization – If false, scalar subqueries will be translated into joins. Allowed values are:
- true
- false
The default value is ‘true’.
late_materialization – If true, join/filter results will always be materialized (saved in result-table format). Allowed values are:
- true
- false
The default value is ‘false’.
ttl – Sets the TTL of the intermediate result tables used in query execution.
update_on_existing_pk – Can be used to customize behavior when the updated primary key value already exists, as described in insert_records(). Allowed values are:
- true
- false
The default value is ‘false’.
preserve_dict_encoding – If true, then columns that were dict encoded in the source table will be dict encoded in the projection table. Allowed values are:
- true
- false
The default value is ‘true’.
validate_change_column – When changing a column using alter table, validate the change before applying it. If true, then validate all values. A value too large (or too long) for the new type will prevent any change. If false, then when a value is too large or long, it will be truncated. Allowed values are:
- true
- false
The default value is ‘true’.
prepare_mode – If true, compiles a query into an execution plan and saves it in the query cache. Query execution is not performed and an empty response will be returned to the user. Allowed values are:
- true
- false
The default value is ‘false’.
view_id – <DEVELOPER> The default value is ‘’.
no_count – <DEVELOPER> The default value is ‘false’.
Returns
A dict with the following entries–
- count_affected (long) –
- The number of objects/records affected.
- response_schema_str (str) –
- Avro schema of output parameter binary_encoded_response or output parameter json_encoded_response.
- binary_encoded_response (str) –
- Avro binary encoded response.
- json_encoded_response (str) –
- Avro JSON encoded response.
- total_number_of_records (long) –
- Total/Filtered number of records.
- has_more_records (bool) –
Too many records. Returned a partial set. Allowed values are:
- true
- false
- paging_table (str) –
- Name of the table that contains the result records of the query. Valid when output parameter has_more_records is true (subject to config.paging_tables_enabled).
- info (dict of str to str) –
Additional information. The default value is an empty dict ( {} ). Allowed keys are:
- count – Number of records in the final table
- record_type (RecordType or None) –
- A RecordType object with which the user can decode the binary data by using GPUdbRecord.decode_binary_data(). If JSON encoding is used, then None.
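A paging sketch: issue the statement, then check has_more_records before requesting the next page. The query, table, and option values are hypothetical; the live calls are commented, and a simulated response shape illustrates the fields a caller typically inspects.

```python
# Hypothetical query; requires a live GPUdb server.
# db = gpudb.GPUdb(host="127.0.0.1", port="9191", encoding="BINARY")

# response = db.execute_sql(
#     statement="SELECT id, ts FROM events WHERE region = 'us-east'",
#     offset=0,
#     limit=1000,                 # or -9999 (END_OF_SET) for the server maximum
#     options={"paging_table_ttl": "20"},
# )
# if response["has_more_records"]:
#     # request the next page of the same statement
#     next_page = db.execute_sql(statement="SELECT id, ts FROM events "
#                                          "WHERE region = 'us-east'",
#                                offset=1000, limit=1000)

# Simulated response shape, for illustration only:
response = {"count_affected": 0, "total_number_of_records": 2500,
            "has_more_records": True, "paging_table": "",
            "info": {"count": "2500"}}
assert response["has_more_records"] is True   # a partial set was returned
```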
-
GPUdb.
execute_sql_and_decode
(statement=None, offset=0, limit=-9999, encoding='binary', request_schema_str='', data=[], options={}, record_type=None, force_primitive_return_types=True, get_column_major=True)[source]¶ SQL Request
Parameters
- statement (str) –
- SQL statement (query, DML, or DDL) to be executed
- offset (long) –
- A non-negative integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use output parameter has_more_records to see if more records exist in the result to be fetched, and input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records; either ‘binary’ or ‘json’. Allowed values are:
- binary
- json
The default value is ‘binary’.
- request_schema_str (str) –
- Avro schema of input parameter data. The default value is ‘’.
- data (list of str) –
- An array of binary-encoded data for the records to be bound to the SQL query. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
parallel_execution – If false, disables the parallel step execution of the given query. Allowed values are:
- true
- false
The default value is ‘true’.
cost_based_optimization – If false, disables the cost-based optimization of the given query. Allowed values are:
- true
- false
The default value is ‘false’.
plan_cache – If false, disables plan caching for the given query. Allowed values are:
- true
- false
The default value is ‘true’.
rule_based_optimization – If false, disables rule-based rewrite optimizations for the given query. Allowed values are:
- true
- false
The default value is ‘true’.
results_caching – If false, disables caching of the results of the given query. Allowed values are:
- true
- false
The default value is ‘true’.
paging_table – If empty, or if the specified paging table does not exist, the system will create a paging table and return it when the query output has more records than the user requested. If the paging table exists in the system, the records from the paging table are returned without evaluating the query.
paging_table_ttl – Sets the TTL of the paging table.
distributed_joins – If true, enables the use of distributed joins in servicing the given query. Any query requiring a distributed join will succeed, though hints can be used in the query to change the distribution of the source data to allow the query to succeed. Allowed values are:
- true
- false
The default value is ‘false’.
distributed_operations – If true, enables the use of distributed operations in servicing the given query. Any query requiring a distributed join will succeed, though hints can be used in the query to change the distribution of the source data to allow the query to succeed. Allowed values are:
- true
- false
The default value is ‘false’.
ssq_optimization – If false, scalar subqueries will be translated into joins. Allowed values are:
- true
- false
The default value is ‘true’.
late_materialization – If true, join/filter results will always be materialized (saved in result-table format). Allowed values are:
- true
- false
The default value is ‘false’.
ttl – Sets the TTL of the intermediate result tables used in query execution.
update_on_existing_pk – Can be used to customize behavior when the updated primary key value already exists, as described in insert_records(). Allowed values are:
- true
- false
The default value is ‘false’.
preserve_dict_encoding – If true, then columns that were dict encoded in the source table will be dict encoded in the projection table. Allowed values are:
- true
- false
The default value is ‘true’.
validate_change_column – When changing a column using alter table, validate the change before applying it. If true, then validate all values. A value too large (or too long) for the new type will prevent any change. If false, then when a value is too large or long, it will be truncated. Allowed values are:
- true
- false
The default value is ‘true’.
prepare_mode – If true, compiles a query into an execution plan and saves it in the query cache. Query execution is not performed and an empty response will be returned to the user. Allowed values are:
- true
- false
The default value is ‘false’.
view_id – <DEVELOPER> The default value is ‘’.
no_count – <DEVELOPER> The default value is ‘false’.
- record_type (
RecordType
or None) – - The record type expected in the results, or None to determinethe appropriate type automatically. If known, providing thismay improve performance in binary mode. Not used in JSON mode.The default value is None.
- force_primitive_return_types (bool) –
- If True, then OrderedDict objects will be returned, where string sub-type columns will have their values converted back to strings; for example, the Python datetime structs used for datetime type columns would have their values returned as strings. If False, then Record objects will be returned, which, for string sub-types, will return native or custom structs; no conversion to string takes place. String conversions, when returning OrderedDicts, incur a speed penalty, and it is strongly recommended to use the Record object option instead. If True, but none of the returned columns require a conversion, then the original Record objects will be returned. Default value is True.
- get_column_major (bool) –
- Indicates if the decoded records will be transposed to be column-major or returned as is (row-major). Default value is True.
Returns
A dict with the following entries–
- count_affected (long) –
- The number of objects/records affected.
- response_schema_str (str) –
- Avro schema of output parameter binary_encoded_response or output parameter json_encoded_response.
- total_number_of_records (long) –
- Total/Filtered number of records.
- has_more_records (bool) –
Too many records. Returned a partial set. Allowed values are:
- true
- false
- paging_table (str) –
- Name of the table that contains the result records of the query. Valid when output parameter has_more_records is true (subject to config.paging_tables_enabled).
- info (dict of str to str) –
Additional information. The default value is an empty dict ( {} ). Allowed keys are:
- count – Number of records in the final table
- records (list of Record) –
- A list of Record objects which contain the decoded records.
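Compared to execute_sql(), the _and_decode variant returns records already decoded, optionally transposed to column-major form. The query below is hypothetical and the live call is commented; a simulated column-major shape illustrates what get_column_major=True produces.

```python
# Hypothetical query; requires a live GPUdb server.
# response = db.execute_sql_and_decode(
#     statement="SELECT id, ts FROM events",
#     limit=100,
#     get_column_major=True,   # transpose records to {column_name: [values...]}
# )
# records = response["records"]

# Simulated column-major result, for illustration only:
records = {"id": [1, 2, 3], "ts": [10, 20, 30]}
assert len(records["id"]) == len(records["ts"])  # columns stay aligned
```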
-
GPUdb.
filter
(table_name=None, view_name='', expression=None, options={})[source]¶ Filters data based on the specified expression. The results are stored in a result set with the given input parameter view_name.
For details see Expressions.
The response message contains the number of points for which the expression evaluated to be true, which is equivalent to the size of the result view.
Parameters
- table_name (str) –
- Name of the table to filter. This may be the name of a collection, a table, or a view (when chaining queries). If filtering a collection, all child tables where the filter expression is valid will be filtered; the filtered result tables will then be placed in a collection specified by input parameter view_name.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Has the same naming restrictions as tables. The default value is ‘’.
- expression (str) –
- The select expression to filter the specified table. For details see Expressions.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
- view_id – ID of the view this filtered view is a member of. The default value is ‘’.
- ttl – Sets the TTL of the view specified in input parameter view_name.
Returns
A dict with the following entries–
- count (long) –
- The number of records that matched the given select expression.
- info (dict of str to str) –
- Additional information.
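A filter sketch: evaluate an expression against a table and persist the matches as a named view with a TTL. Table, view, and predicate are hypothetical; the call is commented since it needs a live server.

```python
# Hypothetical table and expression; requires a live GPUdb server.
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")

expression = "ts > 1609459200 and status = 'ok'"
# response = db.filter(
#     table_name="events",
#     view_name="events_recent",   # named result view (chainable in later filters)
#     expression=expression,
#     options={"ttl": "20"},       # expire the view after 20 minutes
# )
# response["count"] is the number of matching records, i.e. the view's size

assert "and" in expression   # compound predicates are allowed in one expression
```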
-
GPUdb.
filter_by_area
(table_name=None, view_name='', x_column_name=None, x_vector=None, y_column_name=None, y_vector=None, options={})[source]¶ Calculates which objects from a table are within a named area of interest (NAI/polygon). The operation is synchronous, meaning that a response will not be returned until all the matching objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input NAI restriction specification is created with the name input parameter view_name passed in as part of the input.
Parameters
- table_name (str) –
- Name of the table to filter. This may be the name of a collection, a table, or a view (when chaining queries). If filtering a collection, all child tables where the filter expression is valid will be filtered; the filtered result tables will then be placed in a collection specified by input parameter view_name.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Has the same naming restrictions as tables. The default value is ‘’.
- x_column_name (str) –
- Name of the column containing the x values to be filtered.
- x_vector (list of floats) –
- List of x coordinates of the vertices of the polygon representing the area to be filtered. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- y_column_name (str) –
- Name of the column containing the y values to be filtered.
- y_vector (list of floats) –
- List of y coordinates of the vertices of the polygon representing the area to be filtered. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
Returns
A dict with the following entries–
- count (long) –
- The number of records passing the area filter.
- info (dict of str to str) –
- Additional information.
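The polygon for an area filter is supplied as parallel x and y vertex lists. A sketch with hypothetical table/column names and a rough bounding quadrilateral; the live call is commented.

```python
# Hypothetical table/columns; requires a live GPUdb server.
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")

x_vector = [-74.3, -73.7, -73.7, -74.3]   # x (e.g. longitude) of each vertex
y_vector = [40.5, 40.5, 40.9, 40.9]       # y (e.g. latitude), same vertex order

# response = db.filter_by_area(
#     table_name="taxi_trips",
#     view_name="trips_in_box",
#     x_column_name="pickup_lon", x_vector=x_vector,
#     y_column_name="pickup_lat", y_vector=y_vector,
# )
# response["count"] is the number of records inside the polygon

assert len(x_vector) == len(y_vector)     # one y coordinate per x coordinate
```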
-
GPUdb.
filter_by_area_geometry
(table_name=None, view_name='', column_name=None, x_vector=None, y_vector=None, options={})[source]¶ Calculates which geospatial geometry objects from a table intersect a named area of interest (NAI/polygon). The operation is synchronous, meaning that a response will not be returned until all the matching objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input NAI restriction specification is created with the name input parameter view_name passed in as part of the input.
Parameters
- table_name (str) –
- Name of the table to filter. This may be the name of a collection, a table, or a view (when chaining queries). If filtering a collection, all child tables where the filter expression is valid will be filtered; the filtered result tables will then be placed in a collection specified by input parameter view_name.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Must not be an already existing collection, table or view. The default value is ‘’.
- column_name (str) –
- Name of the geospatial geometry column to be filtered.
- x_vector (list of floats) –
- List of x coordinates of the vertices of the polygon representing the area to be filtered. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- y_vector (list of floats) –
- List of y coordinates of the vertices of the polygon representing the area to be filtered. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
Returns
A dict with the following entries–
- count (long) –
- The number of records passing the area filter.
- info (dict of str to str) –
- Additional information.
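The x_vector/y_vector parameters describe the polygon as two parallel coordinate lists. As a client-side illustration of what membership in such a polygon means, here is a standard ray-casting point-in-polygon test; this is only a sketch of containment, not the server's actual geometry-intersection algorithm, and the coordinates are made up:

```python
def point_in_polygon(px, py, x_vector, y_vector):
    """Return True if (px, py) falls inside the polygon whose vertices are
    given as parallel x/y coordinate lists (as in filter_by_area_geometry)."""
    inside = False
    n = len(x_vector)
    j = n - 1
    for i in range(n):
        xi, yi = x_vector[i], y_vector[i]
        xj, yj = x_vector[j], y_vector[j]
        # Count crossings of a horizontal ray extending rightward from the point.
        if (yi > py) != (yj > py):
            x_cross = xj + (py - yj) * (xi - xj) / (yi - yj)
            if px < x_cross:
                inside = not inside
        j = i
    return inside

# A triangle with vertices (0,0), (4,0), (2,4).
xs, ys = [0.0, 4.0, 2.0], [0.0, 0.0, 4.0]
print(point_in_polygon(2.0, 1.0, xs, ys))   # True (inside)
print(point_in_polygon(5.0, 5.0, xs, ys))   # False (outside)
```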
-
GPUdb.filter_by_box(table_name=None, view_name='', x_column_name=None, min_x=None, max_x=None, y_column_name=None, min_y=None, max_y=None, options={})[source]¶
Calculates how many objects within the given table lie in a rectangular box. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set which satisfies the input NAI restriction specification is also created when an input parameter view_name is passed in as part of the input payload.
Parameters
- table_name (str) –
- Name of the table on which the bounding box operation will be performed. Must be an existing table.
- view_name (str) –
- Optional name of the result view that will be created containing the results of the query. Has the same naming restrictions as tables. The default value is ‘’.
- x_column_name (str) –
- Name of the column on which to perform the bounding box query. Must be a valid numeric column.
- min_x (float) –
- Lower bound for the column chosen by input parameter x_column_name. Must be less than or equal to input parameter max_x.
- max_x (float) –
- Upper bound for input parameter x_column_name. Must be greater than or equal to input parameter min_x.
- y_column_name (str) –
- Name of a column on which to perform the bounding box query. Must be a valid numeric column.
- min_y (float) –
- Lower bound for input parameter y_column_name. Must be less than or equal to input parameter max_y.
- max_y (float) –
- Upper bound for input parameter y_column_name. Must be greater than or equal to input parameter min_y.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
Returns
A dict with the following entries–
- count (long) –
- The number of records passing the box filter.
- info (dict of str to str) –
- Additional information.
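The returned count reflects how many records fall within both coordinate bounds. A minimal client-side sketch of that test, assuming inclusive bounds as the min/max descriptions above suggest (sample points are made up):

```python
def in_box(x, y, min_x, max_x, min_y, max_y):
    # A record passes the box filter when both coordinates fall within
    # the rectangle; bounds are treated as inclusive here.
    return min_x <= x <= max_x and min_y <= y <= max_y

points = [(1.0, 1.0), (3.0, 5.0), (-2.0, 0.5)]
count = sum(in_box(x, y, -1.0, 2.0, 0.0, 2.0) for (x, y) in points)
print(count)  # 1 -> only (1.0, 1.0) lies inside the box
```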
-
GPUdb.filter_by_box_geometry(table_name=None, view_name='', column_name=None, min_x=None, max_x=None, min_y=None, max_y=None, options={})[source]¶
Calculates which geospatial geometry objects from a table intersect a rectangular box. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set which satisfies the input NAI restriction specification is also created when an input parameter view_name is passed in as part of the input payload.
Parameters
- table_name (str) –
- Name of the table on which the bounding box operation will be performed. Must be an existing table.
- view_name (str) –
- Optional name of the result view that will be created containing the results of the query. Must not be an already existing collection, table or view. The default value is ‘’.
- column_name (str) –
- Name of the geospatial geometry column to be filtered.
- min_x (float) –
- Lower bound for the x-coordinate of the rectangular box. Must be less than or equal to input parameter max_x.
- max_x (float) –
- Upper bound for the x-coordinate of the rectangular box. Must be greater than or equal to input parameter min_x.
- min_y (float) –
- Lower bound for the y-coordinate of the rectangular box. Must be less than or equal to input parameter max_y.
- max_y (float) –
- Upper bound for the y-coordinate of the rectangular box. Must be greater than or equal to input parameter min_y.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
Returns
A dict with the following entries–
- count (long) –
- The number of records passing the box filter.
- info (dict of str to str) –
- Additional information.
-
GPUdb.filter_by_geometry(table_name=None, view_name='', column_name=None, input_wkt='', operation=None, options={})[source]¶
Applies a geometry filter against a geospatial geometry column in a given table, collection or view. The filtering geometry is provided by input parameter input_wkt.
Parameters
- table_name (str) –
- Name of the table on which the filter by geometry will be performed. Must be an existing table, collection or view containing a geospatial geometry column.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Has the same naming restrictions as tables. The default value is ‘’.
- column_name (str) –
- Name of the column to be used in the filter. Must be a geospatial geometry column.
- input_wkt (str) –
- A geometry in WKT format that will be used to filter the objects in input parameter table_name. The default value is ‘’.
- operation (str) –
The geometric filtering operation to perform. Allowed values are:
- contains – Matches records that contain the given WKT in input parameter input_wkt, i.e. the given WKT is within the bounds of a record’s geometry.
- crosses – Matches records that cross the given WKT.
- disjoint – Matches records that are disjoint from the given WKT.
- equals – Matches records that are the same as the given WKT.
- intersects – Matches records that intersect the given WKT.
- overlaps – Matches records that overlap the given WKT.
- touches – Matches records that touch the given WKT.
- within – Matches records that are within the given WKT.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
Returns
A dict with the following entries–
- count (long) –
- The number of records passing the geometry filter.
- info (dict of str to str) –
- Additional information.
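A hedged sketch of calling filter_by_geometry with the documented parameters; the table, view, and column names are hypothetical, the WKT is illustrative, and a reachable GPUdb server is required to actually run the call:

```python
# A closed polygon ring in WKT format (hypothetical coordinates).
AOI_WKT = "POLYGON((-74.1 40.6, -74.0 40.8, -73.9 40.6, -74.1 40.6))"

def count_within_aoi(db, wkt=AOI_WKT):
    """Count records whose geometry lies entirely within the WKT polygon."""
    resp = db.filter_by_geometry(
        table_name="parcels",          # hypothetical source table
        view_name="parcels_in_aoi",    # result view name
        column_name="geom",            # geospatial geometry column
        input_wkt=wkt,
        operation="within",            # one of the allowed operations above
    )
    return resp["count"]
```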
-
GPUdb.filter_by_list(table_name=None, view_name='', column_values_map=None, options={})[source]¶
Calculates which records from a table have values in the given list for the corresponding column. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input filter specification is also created if an input parameter view_name is passed in as part of the request.
For example, if a type definition has the columns ‘x’ and ‘y’, then a filter by list query with the column map {“x”:[“10.1”, “2.3”], “y”:[“0.0”, “-31.5”, “42.0”]} will return the count of all data points whose x and y values match both in the respective x- and y-lists, e.g., “x = 10.1 and y = 0.0”, “x = 2.3 and y = -31.5”, etc. However, a record with “x = 10.1 and y = -31.5” or “x = 2.3 and y = 0.0” would not be returned because the values in the given lists do not correspond.
Parameters
- table_name (str) –
- Name of the table to filter. This may be the name of a collection, a table, or a view (when chaining queries). If filtering a collection, all child tables where the filter expression is valid will be filtered; the filtered result tables will then be placed in a collection specified by input parameter view_name.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Has the same naming restrictions as tables. The default value is ‘’.
- column_values_map (dict of str to lists of str) –
- List of values for the corresponding column in the table
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
filter_mode – String indicating the filter mode, either ‘in_list’ or ‘not_in_list’. Allowed values are:
- in_list – The filter will match all items that are in the provided list(s).
- not_in_list – The filter will match all items that are not in the provided list(s).
The default value is ‘in_list’.
Returns
A dict with the following entries–
- count (long) –
- The number of records passing the list filter.
- info (dict of str to str) –
- Additional information.
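A hedged sketch of a filter_by_list call using the ‘not_in_list’ filter mode; the table and column names are hypothetical, and a live server is needed to run it:

```python
def exclude_listed_values(db):
    """Keep only records whose 'status' value is NOT in the given list."""
    resp = db.filter_by_list(
        table_name="sensor_readings",      # hypothetical table
        view_name="readings_excluded",     # result view
        column_values_map={"status": ["FAILED", "UNKNOWN"]},
        options={"filter_mode": "not_in_list"},
    )
    return resp["count"]
```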
-
GPUdb.filter_by_radius(table_name=None, view_name='', x_column_name=None, x_center=None, y_column_name=None, y_center=None, radius=None, options={})[source]¶
Calculates which objects from a table lie within a circle with the given radius and center point (i.e. circular NAI). The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input circular NAI restriction specification is also created if an input parameter view_name is passed in as part of the request.
For track data, all track points that lie within the circle plus one point on either side of the circle (if the track goes beyond the circle) will be included in the result.
Parameters
- table_name (str) –
- Name of the table on which the filter by radius operation will be performed. Must be an existing table.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Has the same naming restrictions as tables. The default value is ‘’.
- x_column_name (str) –
- Name of the column to be used for the x-coordinate (the longitude) of the center.
- x_center (float) –
- Value of the longitude of the center. Must be within [-180.0, 180.0]. The minimum allowed value is -180. The maximum allowed value is 180.
- y_column_name (str) –
- Name of the column to be used for the y-coordinate (the latitude) of the center.
- y_center (float) –
- Value of the latitude of the center. Must be within [-90.0, 90.0]. The minimum allowed value is -90. The maximum allowed value is 90.
- radius (float) –
- The radius of the circle within which the search will be performed. Must be a non-zero positive value. It is in meters; so, for example, a value of ‘42000’ means 42 km. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
Returns
A dict with the following entries–
- count (long) –
- The number of records passing the radius filter.
- info (dict of str to str) –
- Additional information.
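Since the radius is given in meters over longitude/latitude coordinates, the haversine great-circle distance is one way to reason about which points fall inside the circle. This is a client-side illustration only, not necessarily the server's exact formula:

```python
import math

def haversine_m(lon1, lat1, lon2, lat2, r_earth=6371000.0):
    """Great-circle distance in meters between two lon/lat points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_earth * math.asin(math.sqrt(a))

# A point one degree of latitude north of the center (~111 km) is
# well outside a 42 km (radius=42000) search circle.
d = haversine_m(0.0, 0.0, 0.0, 1.0)
print(d <= 42000.0)  # False
```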
-
GPUdb.filter_by_radius_geometry(table_name=None, view_name='', column_name=None, x_center=None, y_center=None, radius=None, options={})[source]¶
Calculates which geospatial geometry objects from a table intersect a circle with the given radius and center point (i.e. circular NAI). The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new resultant set (view) which satisfies the input circular NAI restriction specification is also created if an input parameter view_name is passed in as part of the request.
Parameters
- table_name (str) –
- Name of the table on which the filter by radius operation will be performed. Must be an existing table.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Must not be an already existing collection, table or view. The default value is ‘’.
- column_name (str) –
- Name of the geospatial geometry column to be filtered.
- x_center (float) –
- Value of the longitude of the center. Must be within [-180.0, 180.0]. The minimum allowed value is -180. The maximum allowed value is 180.
- y_center (float) –
- Value of the latitude of the center. Must be within [-90.0, 90.0]. The minimum allowed value is -90. The maximum allowed value is 90.
- radius (float) –
- The radius of the circle within which the search will be performed. Must be a non-zero positive value. It is in meters; so, for example, a value of ‘42000’ means 42 km. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
Returns
A dict with the following entries–
- count (long) –
- The number of records passing the radius filter.
- info (dict of str to str) –
- Additional information.
-
GPUdb.filter_by_range(table_name=None, view_name='', column_name=None, lower_bound=None, upper_bound=None, options={})[source]¶
Calculates which objects from a table have a column that is within the given bounds. An object from the table identified by input parameter table_name is added to the view input parameter view_name if its column is within [input parameter lower_bound, input parameter upper_bound] (inclusive). The operation is synchronous. The response provides a count of the number of objects which passed the bound filter. Although this functionality can also be accomplished with the standard filter function, it is more efficient.
For track objects, the count reflects how many points fall within the given bounds (which may not include all the track points of any given track).
Parameters
- table_name (str) –
- Name of the table on which the filter by range operation will be performed. Must be an existing table.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Has the same naming restrictions as tables. The default value is ‘’.
- column_name (str) –
- Name of a column on which the operation would be applied.
- lower_bound (float) –
- Value of the lower bound (inclusive).
- upper_bound (float) –
- Value of the upper bound (inclusive).
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
Returns
A dict with the following entries–
- count (long) –
- The number of records passing the range filter.
- info (dict of str to str) –
- Additional information.
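The bounds are inclusive on both ends, which the count reflects. A minimal client-side sketch of that test (sample values are made up):

```python
def in_range(value, lower_bound, upper_bound):
    # A record passes when its column value lies within
    # [lower_bound, upper_bound], both bounds inclusive.
    return lower_bound <= value <= upper_bound

values = [1.5, 2.0, 2.5, 3.0, 3.5]
count = sum(in_range(v, 2.0, 3.0) for v in values)
print(count)  # 3 -> 2.0, 2.5 and 3.0 all pass (bounds are inclusive)
```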
-
GPUdb.filter_by_series(table_name=None, view_name='', track_id=None, target_track_ids=None, options={})[source]¶
Filters objects matching all points of the given track (works only on track type data). It allows users to specify a particular track to find all other points in the table that fall within specified spatial and temporal ranges of all points of the given track. Additionally, the user can specify another track to see if the two intersect (or come close to each other within the specified ranges). The user also has the flexibility of using different metrics for the spatial distance calculation: Euclidean (flat geometry) or Great Circle (spherical geometry to approximate the Earth’s surface distances). The filtered points are stored in a newly created result set. The return value of the function is the number of points in the resultant set (view).
This operation is synchronous, meaning that a response will not be returned until all the objects are fully available.
Parameters
- table_name (str) –
- Name of the table on which the filter by track operation will be performed. Must be a currently existing table with a track present.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Has the same naming restrictions as tables. The default value is ‘’.
- track_id (str) –
- The ID of the track which will act as the filtering points. Must be an existing track within the given table.
- target_track_ids (list of str) –
- Up to one track ID to intersect with the “filter” track. If provided, it must be a valid track ID within the given set. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
- spatial_radius – A positive number passed as a string representing the radius of the search area centered around each track point’s geospatial coordinates. The value is interpreted in meters. Required parameter.
- time_radius – A positive number passed as a string representing the maximum allowable time difference between the timestamps of a filtered object and the given track’s points. The value is interpreted in seconds. Required parameter.
- spatial_distance_metric – A string representing the coordinate system to use for the spatial search criteria. Optional parameter; default is ‘euclidean’. Allowed values are:
- euclidean
- great_circle
Returns
A dict with the following entries–
- count (long) –
- The number of records passing the series filter.
- info (dict of str to str) –
- Additional information.
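Note that spatial_radius and time_radius are required and, like all option values, must be passed as strings. A hedged call sketch with hypothetical table and track names (a live server is required to actually run it):

```python
def points_near_track(db, track_id):
    """Count points within 500 m and 1 hour of every point of the given track."""
    resp = db.filter_by_series(
        table_name="vessel_tracks",          # hypothetical track table
        view_name="near_track_view",
        track_id=track_id,
        target_track_ids=[],                 # no second track to intersect
        options={
            "spatial_radius": "500",         # meters, passed as a string
            "time_radius": "3600",           # seconds, passed as a string
            "spatial_distance_metric": "great_circle",
        },
    )
    return resp["count"]
```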
-
GPUdb.filter_by_string(table_name=None, view_name='', expression=None, mode=None, column_names=None, options={})[source]¶
Calculates which objects from a table, collection, or view match a string expression for the given string columns. The option ‘case_sensitive’ can be used to modify the behavior for all modes except ‘search’. For ‘search’ mode details and limitations, see Full Text Search.
Parameters
- table_name (str) –
- Name of the table on which the filter operation will be performed. Must be an existing table, collection or view.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Has the same naming restrictions as tables. The default value is ‘’.
- expression (str) –
- The expression with which to filter the table.
- mode (str) –
The string filtering mode to apply. See below for details. Allowed values are:
- search – Full text search query with wildcards and boolean operators. Note that for this mode, no column can be specified in input parameter column_names; all string columns of the table that have text search enabled will be searched.
- equals – Exact whole-string match (accelerated).
- contains – Partial substring match (not accelerated). If the column is a string type (non-charN) and the number of records is too large, it will return 0.
- starts_with – Strings that start with the given expression (not accelerated). If the column is a string type (non-charN) and the number of records is too large, it will return 0.
- regex – Full regular expression search (not accelerated). If the column is a string type (non-charN) and the number of records is too large, it will return 0.
- column_names (list of str) –
- List of columns on which to apply the filter. Ignored for ‘search’ mode. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
case_sensitive – If ‘false’ then string filtering will ignore case. Does not apply to ‘search’ mode. Allowed values are:
- true
- false
The default value is ‘true’.
Returns
A dict with the following entries–
- count (long) –
- The number of records that passed the string filter.
- info (dict of str to str) –
- Additional information.
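The non-‘search’ modes can be illustrated client-side with plain string operations (case-sensitive, per the default ‘case_sensitive’ option). Whether the server anchors ‘regex’ patterns to the whole string is a detail worth verifying; fullmatch is assumed here:

```python
import re

def matches(mode, value, expression):
    """Client-side sketch of the string filter modes documented above."""
    if mode == "equals":
        return value == expression
    if mode == "contains":
        return expression in value
    if mode == "starts_with":
        return value.startswith(expression)
    if mode == "regex":
        return re.fullmatch(expression, value) is not None
    raise ValueError("unsupported mode: " + mode)

print(matches("equals", "alpha", "alpha"))         # True
print(matches("contains", "alphabet", "phab"))     # True
print(matches("starts_with", "alphabet", "beta"))  # False
print(matches("regex", "a123", r"a\d+"))           # True
```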
-
GPUdb.filter_by_table(table_name=None, view_name='', column_name=None, source_table_name=None, source_table_column_name=None, options={})[source]¶
Filters objects in one table based on objects in another table. The user must specify matching column types from the two tables (i.e. the target table from which objects will be filtered and the source table based on which the filter will be created); the column names need not be the same. If an input parameter view_name is specified, then the filtered objects will be put in a newly created view. The operation is synchronous, meaning that a response will not be returned until all objects are fully available in the result view. The return value contains the count (i.e. the size) of the resulting view.
Parameters
- table_name (str) –
- Name of the table whose data will be filtered. Must be an existing table.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Has the same naming restrictions as tables. The default value is ‘’.
- column_name (str) –
- Name of the column by whose value the data will be filtered from the table designated by input parameter table_name.
- source_table_name (str) –
- Name of the table whose data will be compared against in the table called input parameter table_name. Must be an existing table.
- source_table_column_name (str) –
- Name of the column in the input parameter source_table_name whose values will be used as the filter for table input parameter table_name. Must be a geospatial geometry column if in ‘spatial’ mode; otherwise, must match the type of the input parameter column_name.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
filter_mode – String indicating the filter mode, either in_table or not_in_table. Allowed values are:
- in_table
- not_in_table
The default value is ‘in_table’.
mode – Mode - should be either spatial or normal. Allowed values are:
- normal
- spatial
The default value is ‘normal’.
buffer – Buffer size, in meters. Only relevant for spatial mode. The default value is ‘0’.
buffer_method – Method used to buffer polygons. Only relevant for spatial mode. Allowed values are:
- geos – Use geos 1 edge per corner algorithm
The default value is ‘normal’.
max_partition_size – Maximum number of points in a partition. Only relevant for spatial mode. The default value is ‘0’.
max_partition_score – Maximum number of points * edges in a partition. Only relevant for spatial mode. The default value is ‘8000000’.
x_column_name – Name of column containing x value of point being filtered in spatial mode. The default value is ‘x’.
y_column_name – Name of column containing y value of point being filtered in spatial mode. The default value is ‘y’.
Returns
A dict with the following entries–
- count (long) –
- The number of records in input parameter table_name that have input parameter column_name values matching input parameter source_table_column_name values in input parameter source_table_name.
- info (dict of str to str) –
- Additional information.
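In ‘normal’ mode, the in_table/not_in_table behavior amounts to a membership test of the target column's values against the source column's values. A client-side illustration with made-up data:

```python
def in_table_filter(rows, column, source_values, filter_mode="in_table"):
    """Sketch of 'normal' mode: keep rows whose column value does
    (or, for 'not_in_table', does not) appear among the source values."""
    source = set(source_values)
    if filter_mode == "in_table":
        return [r for r in rows if r[column] in source]
    return [r for r in rows if r[column] not in source]

orders = [{"cust_id": 1}, {"cust_id": 2}, {"cust_id": 3}]
active_customers = [1, 3]
print(len(in_table_filter(orders, "cust_id", active_customers)))                   # 2
print(len(in_table_filter(orders, "cust_id", active_customers, "not_in_table")))   # 1
```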
-
GPUdb.filter_by_value(table_name=None, view_name='', is_string=None, value=0, value_str='', column_name=None, options={})[source]¶
Calculates which objects from a table have a particular value for a particular column. The input parameters provide a way to specify either a String or a Double valued column and a desired value for the column on which the filter is performed. The operation is synchronous, meaning that a response will not be returned until all the objects are fully available. The response payload provides the count of the resulting set. A new result view which satisfies the input filter restriction specification is also created with a view name passed in as part of the input payload. Although this functionality can also be accomplished with the standard filter function, it is more efficient.
Parameters
- table_name (str) –
- Name of an existing table on which to perform the calculation.
- view_name (str) –
- If provided, then this will be the name of the view containing the results. Has the same naming restrictions as tables. The default value is ‘’.
- is_string (bool) –
- Indicates whether the value being searched for is string or numeric.
- value (float) –
- The value to search for. The default value is 0.
- value_str (str) –
- The string value to search for. The default value is ‘’.
- column_name (str) –
- Name of a column on which the filter by value would be applied.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- collection_name – Name of a collection which is to contain the newly created view. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created view will be top-level.
Returns
A dict with the following entries–
- count (long) –
- The number of records passing the value filter.
- info (dict of str to str) –
- Additional information.
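Only one of value / value_str is meaningful per call, selected by is_string. A hedged wrapper sketch showing that switch (table and column names are hypothetical; a live server is required to run the call):

```python
def count_with_value(db, column, target):
    """Count records whose column equals target (str or numeric)."""
    is_string = isinstance(target, str)
    resp = db.filter_by_value(
        table_name="events",                 # hypothetical table
        view_name="",                        # no view; just return the count
        is_string=is_string,
        value=0 if is_string else target,    # numeric slot (default 0 if unused)
        value_str=target if is_string else "",
        column_name=column,
    )
    return resp["count"]
```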
-
GPUdb.get_job(job_id=None, options={})[source]¶
Get the status and result of an asynchronously running job. See create_job() for starting an asynchronous job. Some fields of the response are filled only after the submitted job has finished execution.
Parameters
- job_id (long) –
- A unique identifier for the job whose status and result is to be fetched.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- endpoint (str) –
- The endpoint which is being executed asynchronously. E.g. ‘/alter/table’.
- job_status (str) –
Status of the submitted job. Allowed values are:
- RUNNING – The job is currently executing.
- DONE – The job execution has successfully completed and the response is included in the output parameter job_response or output parameter job_response_str field
- ERROR – The job was attempted, but an error was encountered. The output parameter status_map contains the details of the error in error_message
- CANCELLED – Job cancellation was requested while the execution was in progress.
- running (bool) –
- True if the end point is still executing.
- progress (int) –
- Approximate percentage of the job completed.
- successful (bool) –
- True if the job execution completed and no errors were encountered.
- response_encoding (str) –
The encoding of the job result (contained in output parameter job_response or output parameter job_response_str. Allowed values are:
- binary – The job result is binary-encoded. It is contained in output parameter job_response.
- json – The job result is json-encoded. It is contained in output parameter job_response_str.
- job_response (str) –
- The binary-encoded response of the job. This field is populated only when the job has completed and output parameter response_encoding is binary
- job_response_str (str) –
- The json-encoded response of the job. This field is populated only when the job has completed and output parameter response_encoding is json
- status_map (dict of str to str) –
Map of various status strings for the executed job. Allowed keys are:
- error_message – Explains what error occurred while running the job asynchronously. This entry only exists when the job status is ERROR.
- info (dict of str to str) –
- Additional information.
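The job_status values above lend themselves to a simple polling loop. This is an illustrative sketch only; the sleep interval and error handling are assumptions, not a prescribed pattern:

```python
import time

def wait_for_job(db, job_id, poll_seconds=1.0):
    """Poll get_job until the job finishes; return the final response."""
    while True:
        resp = db.get_job(job_id)
        status = resp["job_status"]
        if status == "DONE":
            return resp  # job_response / job_response_str now populated
        if status == "ERROR":
            raise RuntimeError(resp["status_map"].get("error_message", "job failed"))
        if status == "CANCELLED":
            raise RuntimeError("job was cancelled")
        time.sleep(poll_seconds)  # still RUNNING
```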
-
GPUdb.get_records(table_name=None, offset=0, limit=-9999, encoding='binary', options={}, get_record_type=True)[source]¶
Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column. This operation can be performed on tables, views, or on homogeneous collections (collections containing tables of all the same type). Records can be returned encoded as binary, json or geojson.
This operation supports paging through the data via the input parameter offset and input parameter limit parameters. Note that when paging through a table, if the table (or the underlying table in case of a view) is updated (records are inserted, deleted or modified) the records retrieved may differ between calls based on the updates applied.
Parameters
- table_name (str) –
- Name of the table from which the records will be fetched. Must be a table, view or homogeneous collection.
- offset (long) –
- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned. Or END_OF_SET (-9999) to indicate that the max number of results should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use output parameter has_more_records to see if more records exist in the result to be fetched, and input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records. Allowed values are:
- binary
- json
- geojson
The default value is ‘binary’.
- options (dict of str to str) –
The default value is an empty dict ( {} ). Allowed keys are:
expression – Optional filter expression to apply to the table.
fast_index_lookup – Indicates if indexes should be used to perform the lookup for a given expression if possible. Only applicable if there is no sorting, the expression contains only equivalence comparisons based on existing tables indexes and the range of requested values is from [0 to END_OF_SET]. Allowed values are:
- true
- false
The default value is ‘true’.
sort_by – Optional column that the data should be sorted by. Empty by default (i.e. no sorting is applied).
sort_order – String indicating how the returned values should be sorted - ascending or descending. If sort_order is provided, sort_by has to be provided. Allowed values are:
- ascending
- descending
The default value is ‘ascending’.
- get_record_type (bool) –
- If True, deduce and return the record type for the returned records. Default is True.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- type_name (str) –
- type_schema (str) –
- Avro schema of output parameter records_binary or output parameter records_json
- records_binary (list of str) –
- If the input parameter encoding was ‘binary’, then this list contains the binary encoded records retrieved from the table, otherwise not populated.
- records_json (list of str) –
- If the input parameter encoding was ‘json’, then this list contains the JSON encoded records retrieved from the table. If the input parameter encoding was ‘geojson’ this list contains a single entry consisting of a GeoJSON FeatureCollection containing a feature per record. Otherwise not populated.
- total_number_of_records (long) –
- Total/Filtered number of records.
- has_more_records (bool) –
- Too many records. Returned a partial set.
- info (dict of str to str) –
- Additional information.
- record_type (RecordType or None) –
- A RecordType object with which the user can decode the binary data using GPUdbRecord.decode_binary_data(). Available only if get_record_type is True.
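A minimal paging sketch for this method, assuming a reachable GPUdb server; the host and table name are hypothetical:

```python
import gpudb

# Hypothetical connection details; adjust host/port for your deployment
db = gpudb.GPUdb(host="127.0.0.1", port="9191")

offset, page_size = 0, 1000
while True:
    resp = db.get_records(
        table_name="my_table",   # hypothetical table
        offset=offset,
        limit=page_size,
        encoding="json",
    )
    for rec in resp["records_json"]:
        print(rec)               # each entry is a JSON-encoded record
    if not resp["has_more_records"]:
        break
    offset += page_size
```

As noted above, offset/limit paging is not isolated from concurrent inserts, updates, or deletes, so pages may shift between calls.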
-
GPUdb.
get_records_and_decode
(table_name=None, offset=0, limit=-9999, encoding='binary', options={}, record_type=None, force_primitive_return_types=True)[source]¶ Retrieves records from a given table, optionally filtered by an expression and/or sorted by a column. This operation can be performed on tables, views, or on homogeneous collections (collections containing tables of all the same type). Records can be returned encoded as binary, json or geojson.
This operation supports paging through the data via the input parameter offset and input parameter limit parameters. Note that when paging through a table, if the table (or the underlying table in case of a view) is updated (records are inserted, deleted or modified) the records retrieved may differ between calls based on the updates applied.
Parameters
- table_name (str) –
- Name of the table from which the records will be fetched. Must be a table, view or homogeneous collection.
- offset (long) –
- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use output parameter has_more_records to see if more records exist in the result to be fetched, and input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records. Allowed values are:
- binary
- json
- geojson
The default value is ‘binary’.
- options (dict of str to str) –
The default value is an empty dict ( {} ). Allowed keys are:
expression – Optional filter expression to apply to the table.
fast_index_lookup – Indicates if indexes should be used to perform the lookup for a given expression if possible. Only applicable if there is no sorting, the expression contains only equivalence comparisons based on existing tables indexes and the range of requested values is from [0 to END_OF_SET]. Allowed values are:
- true
- false
The default value is ‘true’.
sort_by – Optional column that the data should be sorted by. Empty by default (i.e. no sorting is applied).
sort_order – String indicating how the returned values should be sorted - ascending or descending. If sort_order is provided, sort_by has to be provided. Allowed values are:
- ascending
- descending
The default value is ‘ascending’.
- record_type (RecordType or None) –
- The record type expected in the results, or None to determine the appropriate type automatically. If known, providing this may improve performance in binary mode. Not used in JSON mode. The default value is None.
- force_primitive_return_types (bool) –
- If True, then OrderedDict objects will be returned, where string sub-type columns will have their values converted back to strings; for example, the Python datetime structs used for datetime type columns would have their values returned as strings. If False, then Record objects will be returned, which, for string sub-types, will return native or custom structs; no conversion to string takes place. String conversions, when returning OrderedDicts, incur a speed penalty, and it is strongly recommended to use the Record object option instead. If True, but none of the returned columns require a conversion, then the original Record objects will be returned. Default value is True.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
type_name (str)
- type_schema (str) –
- Avro schema of output parameter records_binary or output parameter records_json
- total_number_of_records (long) –
- Total/Filtered number of records.
- has_more_records (bool) –
- Too many records. Returned a partial set.
- info (dict of str to str) –
- Additional information.
- records (list of Record) –
- A list of Record objects which contain the decoded records.
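A usage sketch for the decoding variant, assuming a reachable server; the table name and filter column are hypothetical:

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

resp = db.get_records_and_decode(
    table_name="my_table",            # hypothetical table
    offset=0,
    limit=100,
    options={"expression": "x > 0"},  # hypothetical filter column
)
for record in resp["records"]:        # already-decoded Record objects
    print(record)
print("matching records:", resp["total_number_of_records"])
```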
-
GPUdb.
get_records_by_column
(table_name=None, column_names=None, offset=0, limit=-9999, encoding='binary', options={})[source]¶ For a given table, retrieves the values from the requested column(s). Maps of column name to the array of values as well as the column data type are returned. This endpoint supports pagination with the input parameter offset and input parameter limit parameters.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as
create_projection()
. When using pagination, if the table (or the underlying table in the case of a view) is modified (records are inserted, updated, or deleted) during a call to the endpoint, the records or values retrieved may differ between calls based on the type of the update, e.g., the contiguity across pages cannot be relied upon.
If input parameter table_name is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
Parameters
- table_name (str) –
- Name of the table on which this operation will be performed. An empty table name retrieves one record from a single-row virtual table, where columns specified should be constants or constant expressions. The table cannot be a parent set.
- column_names (list of str) –
- The list of column values to retrieve. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- offset (long) –
- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use output parameter has_more_records to see if more records exist in the result to be fetched, and input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records; either ‘binary’ or ‘json’. Allowed values are:
- binary
- json
The default value is ‘binary’.
- options (dict of str to str) –
The default value is an empty dict ( {} ). Allowed keys are:
expression – Optional filter expression to apply to the table.
sort_by – Optional column that the data should be sorted by. Used in conjunction with sort_order. The order_by option can be used in lieu of sort_by / sort_order. The default value is ‘’.
sort_order – String indicating how the returned values should be sorted - ascending or descending. If sort_order is provided, sort_by has to be provided. Allowed values are:
- ascending
- descending
The default value is ‘ascending’.
order_by – Comma-separated list of the columns to be sorted by as well as the sort direction, e.g., ‘timestamp asc, x desc’. The default value is ‘’.
convert_wkts_to_wkbs – If true, then WKT string columns will be returned as WKB bytes. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- table_name (str) –
- The same table name as was passed in the parameter list.
- response_schema_str (str) –
- Avro schema of output parameter binary_encoded_response or output parameter json_encoded_response.
- binary_encoded_response (str) –
- Avro binary encoded response.
- json_encoded_response (str) –
- Avro JSON encoded response.
- total_number_of_records (long) –
- Total/Filtered number of records.
- has_more_records (bool) –
- Too many records. Returned a partial set.
- info (dict of str to str) –
- Additional information.
- record_type (RecordType or None) –
- A RecordType object with which the user can decode the binary data using GPUdbRecord.decode_binary_data(). If JSON encoding is used, then None.
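A sketch of a column-oriented fetch (hypothetical host, table, and columns); note the response payload is Avro-encoded and described by response_schema_str:

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

resp = db.get_records_by_column(
    table_name="my_table",            # hypothetical table
    column_names=["ts", "x"],         # hypothetical columns
    offset=0,
    limit=500,
    options={"order_by": "ts desc"},
)
print(resp["response_schema_str"])    # schema describing the encoded payload
print("rows:", resp["total_number_of_records"])
```

To work with decoded values directly, prefer get_records_by_column_and_decode below.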
-
GPUdb.
get_records_by_column_and_decode
(table_name=None, column_names=None, offset=0, limit=-9999, encoding='binary', options={}, record_type=None, force_primitive_return_types=True, get_column_major=True)[source]¶ For a given table, retrieves the values from the requested column(s). Maps of column name to the array of values as well as the column data type are returned. This endpoint supports pagination with the input parameter offset and input parameter limit parameters.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as
create_projection()
. When using pagination, if the table (or the underlying table in the case of a view) is modified (records are inserted, updated, or deleted) during a call to the endpoint, the records or values retrieved may differ between calls based on the type of the update, e.g., the contiguity across pages cannot be relied upon.
If input parameter table_name is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see: dynamic schemas documentation.
Parameters
- table_name (str) –
- Name of the table on which this operation will be performed. An empty table name retrieves one record from a single-row virtual table, where columns specified should be constants or constant expressions. The table cannot be a parent set.
- column_names (list of str) –
- The list of column values to retrieve. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- offset (long) –
- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results allowed by the server should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use output parameter has_more_records to see if more records exist in the result to be fetched, and input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records; either ‘binary’ or ‘json’. Allowed values are:
- binary
- json
The default value is ‘binary’.
- options (dict of str to str) –
The default value is an empty dict ( {} ). Allowed keys are:
expression – Optional filter expression to apply to the table.
sort_by – Optional column that the data should be sorted by. Used in conjunction with sort_order. The order_by option can be used in lieu of sort_by / sort_order. The default value is ‘’.
sort_order – String indicating how the returned values should be sorted - ascending or descending. If sort_order is provided, sort_by has to be provided. Allowed values are:
- ascending
- descending
The default value is ‘ascending’.
order_by – Comma-separated list of the columns to be sorted by as well as the sort direction, e.g., ‘timestamp asc, x desc’. The default value is ‘’.
convert_wkts_to_wkbs – If true, then WKT string columns will be returned as WKB bytes. Allowed values are:
- true
- false
The default value is ‘false’.
- record_type (RecordType or None) –
- The record type expected in the results, or None to determine the appropriate type automatically. If known, providing this may improve performance in binary mode. Not used in JSON mode. The default value is None.
- force_primitive_return_types (bool) –
- If True, then OrderedDict objects will be returned, where string sub-type columns will have their values converted back to strings; for example, the Python datetime structs used for datetime type columns would have their values returned as strings. If False, then Record objects will be returned, which, for string sub-types, will return native or custom structs; no conversion to string takes place. String conversions, when returning OrderedDicts, incur a speed penalty, and it is strongly recommended to use the Record object option instead. If True, but none of the returned columns require a conversion, then the original Record objects will be returned. Default value is True.
- get_column_major (bool) –
- Indicates if the decoded records will be transposed to be column-major or returned as is (row-major). Default value is True.
Returns
A dict with the following entries–
- table_name (str) –
- The same table name as was passed in the parameter list.
- response_schema_str (str) –
- Avro schema of output parameter binary_encoded_response or output parameter json_encoded_response.
- total_number_of_records (long) –
- Total/Filtered number of records.
- has_more_records (bool) –
- Too many records. Returned a partial set.
- info (dict of str to str) –
- Additional information.
- records (list of Record) –
- A list of Record objects which contain the decoded records.
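A decoded-fetch sketch; it also shows the empty-table-name form for evaluating a constant expression against the single-row virtual table (host, table, and column names are hypothetical):

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

# Column-major decode of two hypothetical columns
resp = db.get_records_by_column_and_decode(
    table_name="my_table",     # hypothetical table
    column_names=["ts", "x"],  # hypothetical columns
    limit=100,
    get_column_major=True,
)
print(resp["records"])

# Empty table name: evaluate a constant expression on the single-row virtual table
dist = db.get_records_by_column_and_decode(
    table_name="",
    column_names=["GEODIST(-77.11, 38.88, -71.06, 42.36)"],
)
print(dist["records"])
```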
-
GPUdb.
get_records_by_series
(table_name=None, world_table_name=None, offset=0, limit=250, encoding='binary', options={})[source]¶ Retrieves the complete series/track records from the given input parameter world_table_name based on the partial track information contained in the input parameter table_name.
This operation supports paging through the data via the input parameter offset and input parameter limit parameters.
In contrast to
get_records()
this returns records grouped by series/track. So if input parameter offset is 0 and input parameter limit is 5, this operation would return the first 5 series/tracks in input parameter table_name. Each series/track will be returned sorted by its TIMESTAMP column.
Parameters
- table_name (str) –
- Name of the collection/table/view for which series/tracks will be fetched.
- world_table_name (str) –
- Name of the table containing the complete series/track information to be returned for the tracks present in the input parameter table_name. Typically this is used when retrieving series/tracks from a view (which contains partial series/tracks) but the user wants to retrieve the entire original series/tracks. Can be blank.
- offset (int) –
- A positive integer indicating the number of initial series/tracks to skip (useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (int) –
- A positive integer indicating the maximum number of series/tracks to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results should be returned. The default value is 250.
- encoding (str) –
Specifies the encoding for returned records; either ‘binary’ or ‘json’. Allowed values are:
- binary
- json
The default value is ‘binary’.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- table_names (list of str) –
- The table name (one per series/track) of the returned series/tracks.
- type_names (list of str) –
- The type IDs (one per series/track) of the returned series/tracks. This is useful when input parameter table_name is a collection and the returned series/tracks belong to tables with different types.
- type_schemas (list of str) –
- The type schemas (one per series/track) of the returned series/tracks.
- list_records_binary (list of lists of str) –
- If the encoding parameter of the request was ‘binary’ then this list-of-lists contains the binary encoded records for each object (inner list) in each series/track (outer list). Otherwise, empty list-of-lists.
- list_records_json (list of lists of str) –
- If the encoding parameter of the request was ‘json’ then this list-of-lists contains the json encoded records for each object (inner list) in each series/track (outer list). Otherwise, empty list-of-lists.
- info (dict of str to str) –
- Additional information.
- record_types (list of RecordType) –
- A list of RecordType objects with which the user can decode the binary data using GPUdbRecord.decode_binary_data() per record.
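A sketch that pulls complete tracks backing a filtered view (view and world-table names are hypothetical):

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

resp = db.get_records_by_series(
    table_name="my_track_view",    # hypothetical view of partial tracks
    world_table_name="my_tracks",  # hypothetical table with full tracks
    offset=0,
    limit=10,
    encoding="json",
)
# One inner list of JSON-encoded records per series/track
for track in resp["list_records_json"]:
    print("track with", len(track), "points")
```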
-
GPUdb.
get_records_by_series_and_decode
(table_name=None, world_table_name=None, offset=0, limit=250, encoding='binary', options={}, force_primitive_return_types=True)[source]¶ Retrieves the complete series/track records from the given input parameter world_table_name based on the partial track information contained in the input parameter table_name.
This operation supports paging through the data via the input parameter offset and input parameter limit parameters.
In contrast to
get_records()
this returns records grouped by series/track. So if input parameter offset is 0 and input parameter limit is 5, this operation would return the first 5 series/tracks in input parameter table_name. Each series/track will be returned sorted by its TIMESTAMP column.
Parameters
- table_name (str) –
- Name of the collection/table/view for which series/tracks will be fetched.
- world_table_name (str) –
- Name of the table containing the complete series/track information to be returned for the tracks present in the input parameter table_name. Typically this is used when retrieving series/tracks from a view (which contains partial series/tracks) but the user wants to retrieve the entire original series/tracks. Can be blank.
- offset (int) –
- A positive integer indicating the number of initial series/tracks to skip (useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (int) –
- A positive integer indicating the maximum number of series/tracks to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results should be returned. The default value is 250.
- encoding (str) –
Specifies the encoding for returned records; either ‘binary’ or ‘json’. Allowed values are:
- binary
- json
The default value is ‘binary’.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
- force_primitive_return_types (bool) –
- If True, then OrderedDict objects will be returned, where string sub-type columns will have their values converted back to strings; for example, the Python datetime structs used for datetime type columns would have their values returned as strings. If False, then Record objects will be returned, which, for string sub-types, will return native or custom structs; no conversion to string takes place. String conversions, when returning OrderedDicts, incur a speed penalty, and it is strongly recommended to use the Record object option instead. If True, but none of the returned columns require a conversion, then the original Record objects will be returned. Default value is True.
Returns
A dict with the following entries–
- table_names (list of str) –
- The table name (one per series/track) of the returned series/tracks.
- type_names (list of str) –
- The type IDs (one per series/track) of the returned series/tracks. This is useful when input parameter table_name is a collection and the returned series/tracks belong to tables with different types.
- type_schemas (list of str) –
- The type schemas (one per series/track) of the returned series/tracks.
- info (dict of str to str) –
- Additional information.
- records (list of lists of Record) –
- A list of lists of Record objects which contain the decoded records.
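The decoding variant returns the same grouping with decoded objects (names hypothetical):

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

resp = db.get_records_by_series_and_decode(
    table_name="my_track_view",    # hypothetical view
    world_table_name="my_tracks",  # hypothetical world table
    limit=5,
)
for track in resp["records"]:      # list of lists of decoded Record objects
    for point in track:
        print(point)
```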
-
GPUdb.
get_records_from_collection
(table_name=None, offset=0, limit=-9999, encoding='binary', options={})[source]¶ Retrieves records from a collection. The operation can optionally return the record IDs which can be used in certain queries such as
delete_records()
. This operation supports paging through the data via the input parameter offset and input parameter limit parameters.
Note that when using the Java API, it is not possible to retrieve records from join tables using this operation.
Parameters
- table_name (str) –
- Name of the collection or table from which records are to be retrieved. Must be an existing collection or table.
- offset (long) –
- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the max number of results should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records; either ‘binary’ or ‘json’. Allowed values are:
- binary
- json
The default value is ‘binary’.
- options (dict of str to str) –
The default value is an empty dict ( {} ). Allowed keys are:
return_record_ids – If ‘true’ then return the internal record ID along with each returned record. Default is ‘false’. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- type_names (list of str) –
- The type IDs of the corresponding records in output parameter records_binary or output parameter records_json. This is useful when input parameter table_name is a heterogeneous collection (collections containing tables of different types).
- records_binary (list of str) –
- If the encoding parameter of the request was ‘binary’ then this list contains the binary encoded records retrieved from the table/collection. Otherwise, empty list.
- records_json (list of str) –
- If the encoding parameter of the request was ‘json’, then this list contains the JSON encoded records retrieved from the table/collection. Otherwise, empty list.
- record_ids (list of str) –
- If the ‘return_record_ids’ option of the request was ‘true’, then this list contains the internal ID for each object. Otherwise it will be empty.
- info (dict of str to str) –
Additional information. The default value is an empty dict ( {} ). Allowed keys are:
- total_number_of_records – Total number of records.
- has_more_records –
Too many records. Returned a partial set.
Allowed values are:
- true
- false
- record_types (list of RecordType) –
- A list of RecordType objects with which the user can decode the binary data using GPUdbRecord.decode_binary_data() per record.
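A sketch that retrieves collection records along with their internal IDs, which can later be passed to operations such as delete_records (collection name hypothetical):

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

resp = db.get_records_from_collection(
    table_name="my_collection",             # hypothetical collection
    limit=100,
    encoding="json",
    options={"return_record_ids": "true"},
)
for rec_id, rec in zip(resp["record_ids"], resp["records_json"]):
    print(rec_id, rec)
```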
-
GPUdb.
get_records_from_collection_and_decode
(table_name=None, offset=0, limit=-9999, encoding='binary', options={}, force_primitive_return_types=True)[source]¶ Retrieves records from a collection. The operation can optionally return the record IDs which can be used in certain queries such as
delete_records()
. This operation supports paging through the data via the input parameter offset and input parameter limit parameters.
Note that when using the Java API, it is not possible to retrieve records from join tables using this operation.
Parameters
- table_name (str) –
- Name of the collection or table from which records are to be retrieved. Must be an existing collection or table.
- offset (long) –
- A positive integer indicating the number of initial results to skip (this can be useful for paging through the results). The default value is 0. The minimum allowed value is 0. The maximum allowed value is MAX_INT.
- limit (long) –
- A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the max number of results should be returned. The number of records returned will never exceed the server’s own limit, defined by the max_get_records_size parameter in the server configuration. Use input parameter offset & input parameter limit to request subsequent pages of results. The default value is -9999.
- encoding (str) –
Specifies the encoding for returned records; either ‘binary’ or ‘json’. Allowed values are:
- binary
- json
The default value is ‘binary’.
- options (dict of str to str) –
The default value is an empty dict ( {} ). Allowed keys are:
return_record_ids – If ‘true’ then return the internal record ID along with each returned record. Default is ‘false’. Allowed values are:
- true
- false
The default value is ‘false’.
- force_primitive_return_types (bool) –
- If True, then OrderedDict objects will be returned, where string sub-type columns will have their values converted back to strings; for example, the Python datetime structs used for datetime type columns would have their values returned as strings. If False, then Record objects will be returned, which, for string sub-types, will return native or custom structs; no conversion to string takes place. String conversions, when returning OrderedDicts, incur a speed penalty, and it is strongly recommended to use the Record object option instead. If True, but none of the returned columns require a conversion, then the original Record objects will be returned. Default value is True.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- type_names (list of str) –
- The type IDs of the corresponding records in output parameter records_binary or output parameter records_json. This is useful when input parameter table_name is a heterogeneous collection (collections containing tables of different types).
- record_ids (list of str) –
- If the ‘return_record_ids’ option of the request was ‘true’, then this list contains the internal ID for each object. Otherwise it will be empty.
- info (dict of str to str) –
Additional information. The default value is an empty dict ( {} ). Allowed keys are:
- total_number_of_records – Total number of records.
- has_more_records –
Too many records. Returned a partial set.
Allowed values are:
- true
- false
- records (list of Record) –
- A list of Record objects which contain the decoded records.
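The decoding variant works the same way but returns ready-to-use objects (collection name hypothetical):

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

resp = db.get_records_from_collection_and_decode(
    table_name="my_collection",  # hypothetical collection
    limit=100,
)
for record in resp["records"]:   # decoded Record objects
    print(record)
```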
-
GPUdb.
grant_permission_proc
(name=None, permission=None, proc_name=None, options={})[source]¶ Grants a proc-level permission to a user or role.
Parameters
- name (str) –
- Name of the user or role to which the permission will be granted. Must be an existing user or role.
- permission (str) –
Permission to grant to the user or role. Allowed values are:
- proc_execute – Execute access to the proc.
- proc_name (str) –
- Name of the proc to which the permission grants access. Must be an existing proc, or an empty string to grant access to all procs.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- permission (str) –
- Value of input parameter permission.
- proc_name (str) –
- Value of input parameter proc_name.
- info (dict of str to str) –
- Additional information.
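A minimal grant sketch; the role and proc names are hypothetical, and the connection must have sufficient administrative privileges:

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

# Allow a hypothetical role to execute one proc; pass "" to cover all procs
db.grant_permission_proc(
    name="analyst_role",
    permission="proc_execute",
    proc_name="my_proc",
)
```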
-
GPUdb.
grant_permission_system
(name=None, permission=None, options={})[source]¶ Grants a system-level permission to a user or role.
Parameters
- name (str) –
- Name of the user or role to which the permission will be granted. Must be an existing user or role.
- permission (str) –
Permission to grant to the user or role. Allowed values are:
- system_admin – Full access to all data and system functions.
- system_user_admin – Access to administer users and roles that do not have system_admin permission.
- system_write – Read and write access to all tables.
- system_read – Read-only access to all tables.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- permission (str) –
- Value of input parameter permission.
- info (dict of str to str) –
- Additional information.
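A one-line sketch granting a system-level permission (user name hypothetical; requires administrative privileges):

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

# Grant read/write access to all tables to a hypothetical user
db.grant_permission_system(name="etl_user", permission="system_write")
```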
-
GPUdb.
grant_permission_table
(name=None, permission=None, table_name=None, filter_expression='', options={})[source]¶ Grants a table-level permission to a user or role.
Parameters
- name (str) –
- Name of the user or role to which the permission will be granted. Must be an existing user or role.
- permission (str) –
Permission to grant to the user or role. Allowed values are:
- table_admin – Full read/write and administrative access to the table.
- table_insert – Insert access to the table.
- table_update – Update access to the table.
- table_delete – Delete access to the table.
- table_read – Read access to the table.
- table_name (str) –
- Name of the table to which the permission grants access. Must be an existing table, collection, or view. If a collection, the permission also applies to tables and views in the collection.
- filter_expression (str) –
- Optional filter expression to apply to this grant. Only rows that match the filter will be affected. The default value is ‘’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- columns – Apply security to these columns, comma-separated. The default value is ‘’.
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- permission (str) –
- Value of input parameter permission.
- table_name (str) –
- Value of input parameter table_name.
- filter_expression (str) –
- Value of input parameter filter_expression.
- info (dict of str to str) –
- Additional information.
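A sketch combining a row-level filter with the columns option for column-level security (all names and the filter expression are hypothetical):

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

db.grant_permission_table(
    name="readonly_role",                    # hypothetical role
    permission="table_read",
    table_name="my_table",                   # hypothetical table
    filter_expression="region = 'us-east'",  # hypothetical row-level filter
    options={"columns": "id,region"},        # hypothetical column restriction
)
```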
-
GPUdb.
grant_role
(role=None, member=None, options={})[source]¶ Grants membership in a role to a user or role.
Parameters
- role (str) –
- Name of the role in which membership will be granted. Must be an existing role.
- member (str) –
- Name of the user or role that will be granted membership in input parameter role. Must be an existing user or role.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- role (str) –
- Value of input parameter role.
- member (str) –
- Value of input parameter member.
- info (dict of str to str) –
- Additional information.
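A minimal membership-grant sketch (role and member names hypothetical):

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

# Make a hypothetical user a member of a hypothetical role
db.grant_role(role="analyst_role", member="new_user")
```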
-
GPUdb.
has_proc
(proc_name=None, options={})[source]¶ Checks the existence of a proc with the given name.
Parameters
- proc_name (str) –
- Name of the proc to check for existence.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- proc_name (str) –
- Value of input parameter proc_name
- proc_exists (bool) –
Indicates whether the proc exists or not. Allowed values are:
- true
- false
- info (dict of str to str) –
- Additional information.
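A quick existence check (proc name hypothetical):

```python
import gpudb

db = gpudb.GPUdb(host="127.0.0.1", port="9191")  # hypothetical host

if db.has_proc(proc_name="my_proc")["proc_exists"]:  # hypothetical proc
    print("proc is registered")
```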
-
GPUdb.
has_table
(table_name=None, options={})[source]¶ Checks for the existence of a table with the given name.
Parameters
- table_name (str) –
- Name of the table to check for existence.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- table_exists (bool) –
Indicates whether the table exists or not. Allowed values are:
- true
- false
- info (dict of str to str) –
- Additional information.
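A common pattern is to check for a table before creating or inserting into it. The sketch below uses a placeholder table name, and the live-server lines are commented out; the sample dict simply mirrors the documented return entries:

```python
# Hypothetical has_table response; "weather" is a placeholder table name
# and this dict just mirrors the documented return entries.
sample_response = {"table_name": "weather", "table_exists": True, "info": {}}

# With a live server (commented out):
# import gpudb
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")
# sample_response = db.has_table(table_name="weather")
# if not sample_response["table_exists"]:
#     pass  # create the table before inserting
```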
-
GPUdb.
has_type
(type_id=None, options={})[source]¶ Check for the existence of a type.
Parameters
- type_id (str) –
- Id of the type returned in response to a create_type() request.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- type_id (str) –
- Value of input parameter type_id.
- type_exists (bool) –
Indicates whether the type exists or not. Allowed values are:
- true
- false
- info (dict of str to str) –
- Additional information.
-
GPUdb.
insert_records
(table_name=None, data=None, list_encoding=None, options={}, record_type=None)[source]¶ Adds multiple records to the specified table. The operation is synchronous, meaning that a response will not be returned until all the records are fully inserted and available. The response payload provides the counts of the number of records actually inserted and/or updated, and can provide the unique identifier of each added record.
The input parameter options parameter can be used to customize this function’s behavior.
The update_on_existing_pk option specifies the record collision policy for inserting into a table with a primary key, but is ignored if no primary key exists.
The return_record_ids option indicates that the database should return the unique identifiers of inserted records.
Parameters
- table_name (str) –
- Table to which the records are to be added. Must be an existing table.
- data (list of Records) –
- An array of binary- or JSON-encoded data, or Record objects for the records to be added. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- list_encoding (str) –
The encoding of the records to be inserted. Allowed values are:
- binary
- json
The default value is ‘binary’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
update_on_existing_pk – Specifies the record collision policy for inserting into a table with a primary key. If set to true, any existing table record with primary key values that match those of a record being inserted will be replaced by that new record. If set to false, any existing table record with primary key values that match those of a record being inserted will remain unchanged and the new record discarded. If the specified table does not have a primary key, then this option is ignored. Allowed values are:
- true
- false
The default value is ‘false’.
return_record_ids – If true, return the internal record ID along with each inserted record. Allowed values are:
- true
- false
The default value is ‘false’.
truncate_strings – If set to true, any strings which are too long for their target charN string columns will be truncated to fit. Allowed values are:
- true
- false
The default value is ‘false’.
return_individual_errors – If set to true, success will always be returned, and any errors found will be included in the info map. The “bad_record_indices” entry will contain a comma-separated list of the bad records’ indices (0-based), and there will also be an “error_N” entry for each record with an error, where N is the record’s index (0-based). Allowed values are:
- true
- false
The default value is ‘false’.
allow_partial_batch – If set to true, all correct records will be inserted and incorrect records will be rejected and reported. Otherwise, the entire batch will be rejected if any records are incorrect. Allowed values are:
- true
- false
The default value is ‘false’.
dry_run – If set to true, no data will be saved and any errors will be returned. Allowed values are:
- true
- false
The default value is ‘false’.
- record_type (RecordType) –
- A
RecordType
object with which the binary data will be encoded. If None, then it is assumed that the data is already encoded, and no further encoding will occur. Default is None.
Returns
A dict with the following entries–
- record_ids (list of str) –
- An array containing the IDs with which the added records are identified internally.
- count_inserted (int) –
- The number of records inserted.
- count_updated (int) –
- The number of records updated.
- info (dict of str to str) –
Additional information. Allowed keys are:
- bad_record_indices – If return_individual_errors option is specified or implied, returns a comma-separated list of invalid indices (0-based)
- error_N – Error message for record at index N (0-based)
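Note that the option values above are all strings, including the booleans. A minimal sketch of building the options map follows; the table name and records are placeholders, and the server call is commented out since it needs a live instance:

```python
# Hypothetical insert_records options; every value is the string
# "true"/"false", not a Python bool.
options = {
    "update_on_existing_pk": "true",    # replace rows on primary-key collision
    "return_record_ids": "true",        # return internal record IDs
    "return_individual_errors": "true", # report bad records via the info map
}

# With a live server (commented out; "my_table" and `records` are placeholders):
# import gpudb
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")
# response = db.insert_records("my_table", records, options=options)
# bad = response["info"].get("bad_record_indices", "")
# bad_indices = [int(i) for i in bad.split(",") if i]  # 0-based indices
```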
-
GPUdb.
insert_records_from_files
(table_name=None, filepaths=None, create_table_options={}, options={})[source]¶ Reads from one or more files located on the server and inserts the data into a new or existing table.
For CSV files, there are two loading schemes: positional and name-based. The name-based loading scheme is enabled when the file has a header present and text_has_header is set to true. In this scheme, the field names in the source file(s) must match the target table’s column names exactly; however, the source file can have more fields than the target table has columns. If error_handling is set to permissive, the source file can also have fewer fields than the target table has columns. If the name-based loading scheme is being used, names matching the file header’s names may be provided to columns_to_load instead of numbers, but ranges are not supported.
Returns once all files are processed.
Parameters
- table_name (str) –
- Name of the table into which the data will be inserted. If the table does not exist, the table will be created using either an existing type_id or the type inferred from the file.
- filepaths (list of str) –
- Absolute or relative filepath(s) from where files will be loaded. Relative filepaths are relative to the defined external_files_directory parameter in the server configuration. The filepaths may include wildcards (*). If the first path ends in .tsv, the text delimiter will be defaulted to a tab character. If the first path ends in .psv, the text delimiter will be defaulted to a pipe character (|). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- create_table_options (dict of str to str) –
Options used when creating a new table. The default value is an empty dict ( {} ). Allowed keys are:
type_id – ID of a currently registered type. The default value is ‘’.
no_error_if_exists – If true, prevents an error from occurring if the table already exists and is of the given type. If a table with the same ID but a different type exists, it is still an error. Allowed values are:
- true
- false
The default value is ‘false’.
collection_name – Name of a collection which is to contain the newly created table. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created table will be a top-level table.
is_replicated – For a table, affects the distribution scheme for the table’s data. If true and the given type has no explicit shard key defined, the table will be replicated. If false, the table will be sharded according to the shard key specified in the given type_id, or randomly sharded, if no shard key is specified. Note that a type containing a shard key cannot be used to create a replicated table. Allowed values are:
- true
- false
The default value is ‘false’.
foreign_keys – Semicolon-separated list of foreign keys, of the format ‘(source_column_name [, ...]) references target_table_name(primary_key_column_name [, ...]) [as foreign_key_name]’.
foreign_shard_key – Foreign shard key of the format ‘source_column references shard_by_column from target_table(primary_key_column)’.
partition_type – Partitioning scheme to use. Allowed values are:
- RANGE – Use range partitioning.
- INTERVAL – Use interval partitioning.
- LIST – Use list partitioning.
- HASH – Use hash partitioning.
partition_keys – Comma-separated list of partition keys, which are the columns or column expressions by which records will be assigned to partitions defined by partition_definitions.
partition_definitions – Comma-separated list of partition definitions, whose format depends on the choice of partition_type. See range partitioning, interval partitioning, list partitioning, or hash partitioning for example formats.
is_automatic_partition – If true, a new partition will be created for values which don’t fall into an existing partition. Currently only supported for list partitions. Allowed values are:
- true
- false
The default value is ‘false’.
ttl – For a table, sets the TTL of the table specified in input parameter table_name.
chunk_size – Indicates the number of records per chunk to be used for this table.
is_result_table – For a table, indicates whether the table is an in-memory table. A result table cannot contain store_only, text_search, or string columns (charN columns are acceptable), and it will not be retained if the server is restarted. Allowed values are:
- true
- false
The default value is ‘false’.
strategy_definition – The tier strategy for the table and its columns. See tier strategy usage for format and tier strategy examples for examples.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
batch_size – Specifies number of records to process before inserting.
column_formats – For each target column specified, applies the column-property-bound format to the source data loaded into that column. Each column format will contain a mapping of one or more of its column properties to an appropriate format for each property. Currently supported column properties include date, time, & datetime. The parameter value must be formatted as a JSON string of maps of column names to maps of column properties to their corresponding column formats, e.g., { “order_date” : { “date” : “%Y.%m.%d” }, “order_time” : { “time” : “%H:%M:%S” } }. See default_column_formats for valid format syntax.
columns_to_load – For delimited_text file_type only. Specifies a comma-delimited list of column positions or names to load instead of loading all columns in the file(s); if more than one file is being loaded, the list of columns will apply to all files. Column numbers can be specified discretely or as a range, e.g., a value of ‘5,7,1..3’ will create a table with the first column in the table being the fifth column in the file, followed by the seventh column in the file, then the first column through the fourth column in the file.
default_column_formats – Specifies the default format to be applied to source data loaded into columns with the corresponding column property. This default column-property-bound format can be overridden by specifying a column property & format for a given target column in column_formats. For each specified annotation, the format will apply to all columns with that annotation unless a custom column_formats for that annotation is specified. The parameter value must be formatted as a JSON string that is a map of column properties to their respective column formats, e.g., { “date” : “%Y.%m.%d”, “time” : “%H:%M:%S” }. Column formats are specified as a string of control characters and plain text. The supported control characters are ‘Y’, ‘m’, ‘d’, ‘H’, ‘M’, and ‘S’, which follow the Linux ‘strptime()’ specification, as well as ‘s’, which specifies seconds and fractional seconds (though the fractional component will be truncated past milliseconds). Formats for the ‘date’ annotation must include the ‘Y’, ‘m’, and ‘d’ control characters. Formats for the ‘time’ annotation must include the ‘H’, ‘M’, and either ‘S’ or ‘s’ (but not both) control characters. Formats for the ‘datetime’ annotation meet both the ‘date’ and ‘time’ control character requirements. For example, ‘{“datetime” : “%m/%d/%Y %H:%M:%S” }’ would be used to interpret text such as “05/04/2000 12:12:11”.
dry_run – If set to true, no data will be inserted but the file will be read with the applied error_handling mode and the number of valid records that would be normally inserted are returned. Allowed values are:
- false
- true
The default value is ‘false’.
error_handling – Specifies how errors should be handled upon insertion. Allowed values are:
- permissive – Records with missing columns are populated with nulls if possible; otherwise, the malformed records are skipped.
- ignore_bad_records – Malformed records are skipped.
- abort – Stops current insertion and aborts entire operation when an error is encountered.
The default value is ‘permissive’.
file_type – File type for the file(s). Allowed values are:
- delimited_text – Indicates the file(s) are in delimited text format, e.g., CSV, TSV, PSV, etc.
The default value is ‘delimited_text’.
loading_mode – Specifies how to divide data loading among nodes. Allowed values are:
- head – The head node loads all data. All files must be available on the head node.
- distributed_shared – The worker nodes coordinate loading a set of files that are available to all of them. All files must be available on all nodes. This option is best when there is a shared file system.
- distributed_local – Each worker node loads all files that are available to it. This option is best when each worker node has its own file system.
The default value is ‘head’.
text_comment_string – For delimited_text file_type only. All lines in the file(s) starting with the provided string are ignored. The comment string has no effect unless it appears at the beginning of a line. The default value is ‘#’.
text_delimiter – For delimited_text file_type only. Specifies the delimiter for values and columns in the header row (if present). Must be a single character. The default value is ‘,’.
text_escape_character – For delimited_text file_type only. The character used in the file(s) to escape certain character sequences in text. For example, the escape character followed by a literal ‘n’ escapes to a newline character within the field. Can be used within a quoted string to escape a quote character. An empty value for this option specifies no escape character.
text_has_header – For delimited_text file_type only. Indicates whether the delimited text files have a header row. Allowed values are:
- true
- false
The default value is ‘true’.
text_header_property_delimiter – For delimited_text file_type only. Specifies the delimiter for column properties in the header row (if present). Cannot be set to same value as text_delimiter. The default value is ‘|’.
text_null_string – For delimited_text file_type only. The value in the file(s) to treat as a null value in the database. The default value is ‘’.
text_quote_character – For delimited_text file_type only. The quote character used in the file(s), typically encompassing a field value. The character must appear at the beginning and end of a field to take effect. Delimiters within quoted fields are not treated as delimiters. Within a quoted field, a doubled quote character (“”) can be used to escape a single literal quote character. To not have a quote character, specify an empty string (“”). The default value is ‘”’.
truncate_table – If set to true, truncates the table specified by input parameter table_name prior to loading the file(s). Allowed values are:
- true
- false
The default value is ‘false’.
num_tasks_per_rank – Optional: number of tasks per rank for reading files. The default is external_file_reader_num_tasks.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- type_id (str) –
- Type ID for the table.
- count_inserted (long) –
- Number of records inserted.
- count_skipped (long) –
- Number of records skipped when not running in abort error handling mode.
- count_updated (long) –
- Number of records updated. The default value is -1.
- info (dict of str to str) –
- Additional information.
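A hedged sketch of a typical file-load configuration follows; the table name and file path are placeholders, and the call is commented out because it requires a running server with the files present on it. Setting dry_run first is a reasonable way to validate the file before a real load:

```python
# Hypothetical options for insert_records_from_files; values are strings.
create_table_options = {
    "is_replicated": "true",        # replicate the new table instead of sharding
}
options = {
    "file_type": "delimited_text",
    "text_has_header": "true",      # enables the name-based loading scheme
    "error_handling": "permissive", # null-fill or skip malformed records
    "dry_run": "true",              # validate the file without inserting
}

# import gpudb
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")
# response = db.insert_records_from_files(
#     table_name="trips",                 # placeholder table name
#     filepaths=["data/trips_*.csv"],     # placeholder path; wildcards allowed
#     create_table_options=create_table_options,
#     options=options)
# print(response["count_inserted"], response["count_skipped"])
```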
-
GPUdb.
insert_records_random
(table_name=None, count=None, options={})[source]¶ Generates a specified number of random records and adds them to the given table. There is an optional parameter that allows the user to customize the ranges of the column values. It also allows the user to specify linear profiles for some or all columns, in which case linear values are generated rather than random ones. Only individual tables are supported for this operation.
This operation is synchronous, meaning that a response will not be returned until all random records are fully available.
Parameters
- table_name (str) –
- Table to which random records will be added. Must be an existing table. Also, must be an individual table, not a collection of tables, nor a view of a table.
- count (long) –
- Number of records to generate.
- options (dict of str to dicts of str to floats) –
Optional parameter to pass in specifications for the randomness of the values. This map is different from the options parameter of most other endpoints in that it is a map of string to map of string to doubles, while most others are maps of string to string. In this map, the top-level keys represent which column’s parameters are being specified, while the internal keys represent which parameter is being specified. These parameters take on different meanings depending on the type of the column. Below follows a more detailed description of the map. The default value is an empty dict ( {} ). Allowed keys are:
- seed – If provided, the internal random number generator will be initialized with the given value. The minimum is 0. This allows the same set of random numbers to be generated across invocations of this endpoint, in case the user wants to repeat the test. Since input parameter options is a map of maps, an internal map is needed to provide the seed value. For example, to pass 100 as the seed value through this parameter, use something equivalent to: ‘options’ = {‘seed’: { ‘value’: 100 } }. Allowed keys are:
- value – Pass the seed value here.
- all – This key indicates that the specifications relayed in the internal map are to be applied to all columns of the records. Allowed keys are:
- min – For numerical columns, the minimum of the generated values is set to this value. Default is -99999. For point, shape, and track columns, min for numeric ‘x’ and ‘y’ columns needs to be within [-180, 180] and [-90, 90], respectively. The default minimum possible values for these columns in such cases are -180.0 and -90.0. For the ‘TIMESTAMP’ column, the default minimum corresponds to Jan 1, 2010. For string columns, the minimum length of the randomly generated strings is set to this value (default is 0); the value needs to be within [0, 200]. If both minimum and maximum are provided, minimum must be less than or equal to max. If the min is outside the accepted ranges for string columns and the ‘x’ and ‘y’ columns of point/shape/track, then those parameters will not be set; however, no error will be thrown in such a case. It is the responsibility of the user to use the all parameter judiciously.
- max – For numerical columns, the maximum of the generated values is set to this value. Default is 99999. For point, shape, and track columns, max for numeric ‘x’ and ‘y’ columns needs to be within [-180, 180] and [-90, 90], respectively. The default maximum possible values for these columns in such cases are 180.0 and 90.0. For string columns, the maximum length of the randomly generated strings is set to this value (default is 200); the value needs to be within [0, 200]. If both minimum and maximum are provided, max must be greater than or equal to min. If the max is outside the accepted ranges for string columns and the ‘x’ and ‘y’ columns of point/shape/track, then those parameters will not be set; however, no error will be thrown in such a case. It is the responsibility of the user to use the all parameter judiciously.
- interval – If specified, generate values for all columns evenly spaced with the given interval value. If a max value is specified for a given column, the data is randomly generated between min and max and decimated down to the interval. If no max is provided, the data is linearly generated starting at the minimum value (instead of generating random data). For non-decimated string-type columns, the interval value is ignored. Instead, the values are generated following the pattern: ‘attrname_creationIndex#’, i.e., the column name suffixed with an underscore and a running counter (starting at 0). For string types with limited size (e.g., char4), the prefix is dropped. No nulls will be generated for nullable columns.
- null_percentage – If specified, then generate the given percentage of the count as nulls for all nullable columns. This option will be ignored for non-nullable columns. The value must be within the range [0, 1.0]. The default value is 5% (0.05).
- cardinality – If specified, limit the randomly generated values to a fixed set. Not allowed on a column with interval specified, and is not applicable to WKT or Track-specific columns. The value must be greater than 0. This option is disabled by default.
- attr_name – Use the desired column name in place of attr_name, and set the following parameters for the column specified. This overrides any parameter set by all. Allowed keys are:
- min – For numerical columns, the minimum of the generated values is set to this value. Default is -99999. For point, shape, and track columns, min for numeric ‘x’ and ‘y’ columns needs to be within [-180, 180] and [-90, 90], respectively. The default minimum possible values for these columns in such cases are -180.0 and -90.0. For the ‘TIMESTAMP’ column, the default minimum corresponds to Jan 1, 2010. For string columns, the minimum length of the randomly generated strings is set to this value (default is 0); the value needs to be within [0, 200]. If both minimum and maximum are provided, minimum must be less than or equal to max. If the min is outside the accepted ranges for string columns and the ‘x’ and ‘y’ columns of point/shape/track, then those parameters will not be set; however, no error will be thrown in such a case. It is the responsibility of the user to use the all parameter judiciously.
- max – For numerical columns, the maximum of the generated values is set to this value. Default is 99999. For point, shape, and track columns, max for numeric ‘x’ and ‘y’ columns needs to be within [-180, 180] and [-90, 90], respectively. The default maximum possible values for these columns in such cases are 180.0 and 90.0. For string columns, the maximum length of the randomly generated strings is set to this value (default is 200); the value needs to be within [0, 200]. If both minimum and maximum are provided, max must be greater than or equal to min. If the max is outside the accepted ranges for string columns and the ‘x’ and ‘y’ columns of point/shape/track, then those parameters will not be set; however, no error will be thrown in such a case. It is the responsibility of the user to use the all parameter judiciously.
- interval – If specified, generate values for all columns evenly spaced with the given interval value. If a max value is specified for a given column, the data is randomly generated between min and max and decimated down to the interval. If no max is provided, the data is linearly generated starting at the minimum value (instead of generating random data). For non-decimated string-type columns, the interval value is ignored. Instead, the values are generated following the pattern: ‘attrname_creationIndex#’, i.e., the column name suffixed with an underscore and a running counter (starting at 0). For string types with limited size (e.g., char4), the prefix is dropped. No nulls will be generated for nullable columns.
- null_percentage – If specified and if this column is nullable, then generate the given percentage of the count as nulls. This option will result in an error if the column is not nullable. The value must be within the range [0, 1.0]. The default value is 5% (0.05).
- cardinality – If specified, limit the randomly generated values to a fixed set. Not allowed on a column with interval specified, and is not applicable to WKT or Track-specific columns. The value must be greater than 0. This option is disabled by default.
- track_length – This key-map pair is only valid for track data sets (an error is thrown otherwise). No nulls will be generated for nullable columns. Allowed keys are:
- min – Minimum possible length for generated series; default is 100 records per series. Must be an integral value within the range [1, 500]. If both min and max are specified, min must be less than or equal to max.
- max – Maximum possible length for generated series; default is 500 records per series. Must be an integral value within the range [1, 500]. If both min and max are specified, max must be greater than or equal to min.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- count (long) –
- Number of records inserted.
- info (dict of str to str) –
- Additional information.
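The map-of-maps shape of the options parameter can be sketched as follows; the table and column names are placeholders, and the server call is commented out since it needs a live instance:

```python
# Hypothetical insert_records_random options: top-level keys name a column
# (or the special 'seed'/'all'/'track_length' keys), inner keys name a
# parameter.  "price" and "label" are placeholder column names.
options = {
    "seed": {"value": 100},              # reproducible generation
    "all": {"null_percentage": 0.0},     # no nulls in any nullable column
    "price": {"min": 1.0, "max": 500.0}, # per-column numeric range override
    "label": {"min": 4, "max": 12},      # string length bounds for a column
}

# import gpudb
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")
# response = db.insert_records_random("demo_table", 1000, options=options)
# assert response["count"] == 1000
```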
-
GPUdb.
insert_symbol
(symbol_id=None, symbol_format=None, symbol_data=None, options={})[source]¶ Adds a symbol or icon (i.e., an image) to represent data points when data is rendered visually. Users must provide the symbol identifier (string), a format (currently supported: ‘svg’ and ‘svg_path’), the data for the symbol, and any additional optional parameters (e.g., color). To have a symbol used for rendering, create a table with a string column named ‘SYMBOLCODE’ (along with ‘x’ or ‘y’, for example). Then, when the table is rendered (via WMS), if the ‘dosymbology’ parameter is ‘true’, the value of the ‘SYMBOLCODE’ column is used to pick the symbol displayed for each point.
Parameters
- symbol_id (str) –
- The ID of the symbol being added. This is the same ID that should be in the ‘SYMBOLCODE’ column for objects using this symbol.
- symbol_format (str) –
Specifies the symbol format. Must be either ‘svg’ or ‘svg_path’. Allowed values are:
- svg
- svg_path
- symbol_data (str) –
- The actual symbol data. If input parameter symbol_format is ‘svg’ then this should be the raw bytes representing an svg file. If input parameter symbol_format is ‘svg_path’ then this should be an svg path string, for example: ‘M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z’
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- color – If input parameter symbol_format is ‘svg’ this is ignored. If input parameter symbol_format is ‘svg_path’ then this option specifies the color (in RRGGBB hex format) of the path. For example, to have the path rendered in red, use ‘FF0000’. If ‘color’ is not provided then ‘00FF00’ (i.e., green) is used by default.
Returns
A dict with the following entries–
- symbol_id (str) –
- Value of input parameter symbol_id.
- info (dict of str to str) –
- Additional information.
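A sketch of registering a red svg_path symbol follows; the symbol ID is a placeholder, the path string is the example from above, and the call is commented out because it requires a live server:

```python
# Hypothetical insert_symbol arguments; "red_bar" is a placeholder ID.
symbol_data = "M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z"
options = {"color": "FF0000"}  # RRGGBB hex; only honored for 'svg_path'

# import gpudb
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")
# response = db.insert_symbol(
#     symbol_id="red_bar",        # must match the SYMBOLCODE column values
#     symbol_format="svg_path",
#     symbol_data=symbol_data,
#     options=options)
```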
-
GPUdb.
kill_proc
(run_id='', options={})[source]¶ Kills a running proc instance.
Parameters
- run_id (str) –
- The run ID of a running proc instance. If a proc with a matching run ID is not found or the proc instance has already completed, no procs will be killed. If not specified, all running proc instances will be killed. The default value is ‘’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- run_tag – If input parameter run_id is specified, kill the proc instance that has a matching run ID and a matching run tag that was provided to execute_proc(). If input parameter run_id is not specified, kill the proc instance(s) where a matching run tag was provided to execute_proc(). The default value is ‘’.
Returns
A dict with the following entries–
- run_ids (list of str) –
- List of run IDs of proc instances that were killed.
- info (dict of str to str) –
- Additional information.
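A sketch of killing proc instances by run tag rather than by run ID follows; the tag is a hypothetical value that would have been passed earlier to execute_proc(), and the call is commented out since it needs a live server:

```python
# Hypothetical kill_proc options; "nightly_etl" is a placeholder run tag.
options = {"run_tag": "nightly_etl"}

# import gpudb
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")
# response = db.kill_proc(run_id="", options=options)  # empty run_id: match by tag
# print(response["run_ids"])  # run IDs of the instances that were killed
```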
-
GPUdb.
lock_table
(table_name=None, lock_type='status', options={})[source]¶ Manages global access to a table’s data. By default, a table has an input parameter lock_type of read_write, indicating that all operations are permitted. A user may request a read_only or a write_only lock, after which only read or write operations, respectively, are permitted on the table until the lock is removed. When input parameter lock_type is no_access, no operations are permitted on the table. The lock status can be queried by setting input parameter lock_type to status.
Parameters
- table_name (str) –
- Name of the table to be locked. It must be a currently existing table, collection, or view.
- lock_type (str) –
The type of lock being applied to the table. Setting it to status will return the current lock status of the table without changing it. Allowed values are:
- status – Show locked status
- no_access – Allow no read/write operations
- read_only – Allow only read operations
- write_only – Allow only write operations
- read_write – Allow all read/write operations
The default value is ‘status’.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- lock_type (str) –
- Returns the lock state of the table.
- info (dict of str to str) –
- Additional information.
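A typical maintenance pattern is to lock the table, do the work, then restore access. The sketch below uses a placeholder table name, with the live-server calls commented out; the list simply captures the documented lock_type values:

```python
# The documented lock_type values for lock_table.
valid_lock_types = ["status", "no_access", "read_only", "write_only", "read_write"]

# With a live server ("my_table" is a placeholder; commented out):
# import gpudb
# db = gpudb.GPUdb(host="127.0.0.1", port="9191")
# db.lock_table("my_table", lock_type="read_only")   # reject writes during maintenance
# state = db.lock_table("my_table", lock_type="status")["lock_type"]  # query only
# db.lock_table("my_table", lock_type="read_write")  # restore full access
```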
-
GPUdb.
match_graph
(graph_name=None, sample_points=None, solve_method='markov_chain', solution_table='', options={})[source]¶ Matches a directed route implied by a given set of latitude/longitude points to an existing underlying road network graph using a given solution type.
IMPORTANT: It’s highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /match/graph examples before using this endpoint.
Parameters
- graph_name (str) –
- Name of the underlying geospatial graph resource to match to using input parameter sample_points.
- sample_points (list of str) –
- Sample points used to match to an underlying geospatial graph. Sample points must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with: existing column names, e.g., ‘table.column AS SAMPLE_X’; expressions, e.g., ‘ST_MAKEPOINT(table.x, table.y) AS SAMPLE_WKTPOINT’; or constant values, e.g., ‘{1, 2, 10} AS SAMPLE_TRIPID’. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- solve_method (str) –
The type of solver to use for graph matching. Allowed values are:
- markov_chain – Matches input parameter sample_points to the graph using the Hidden Markov Model (HMM)-based method, which conducts a range-tree closest-edge search to find the best combinations of possible road segments (num_segments) for each sample point to create the best route. The route is secured one point at a time while looking ahead chain_width number of points, so the prediction is corrected after each point. This solution type is the most accurate but also the most computationally intensive. Related options: num_segments and chain_width.
- match_od_pairs – Matches input parameter sample_points to find the most probable path between origin and destination pairs with cost constraints.
- match_supply_demand – Matches input parameter sample_points to optimize scheduling multiple supplies (trucks) with varying sizes to varying demand sites with varying capacities per depot. Related options: partial_loading and max_combinations.
- match_batch_solves – Matches input parameter sample_points source and destination pairs for shortest-path solves in batch mode.
The default value is ‘markov_chain’.
- solution_table (str) –
- The name of the table used to store the results; this table contains a track of geospatial points for the matched portion of the graph, a track ID, and a score value. Also outputs a details table containing a trip ID (that matches the track ID), the latitude/longitude pair, the timestamp the point was recorded at, and an edge ID corresponding to the matched road segment. Has the same naming restrictions as tables. Must not be an existing table of the same name. The default value is ‘’.
- options (dict of str to str) –
Additional parameters. The default value is an empty dict ( {} ). Allowed keys are:
gps_noise – GPS noise value (in meters) to remove redundant sample points. Use -1 to disable noise reduction. The default value accounts for 95% of point variation (+ or -5 meters). The default value is ‘5.0’.
num_segments – Maximum number of potentially matching road segments for each sample point. For the markov_chain solver, the default is 3. The default value is ‘3’.
search_radius – Maximum search radius used when snapping sample points onto potentially matching surrounding segments. The default value corresponds to approximately 100 meters. The default value is ‘0.001’.
chain_width – For the markov_chain solver only. Length of the sample points lookahead window within the Markov kernel; the larger the number, the more accurate the solution. The default value is ‘9’.
source – Optional WKT starting point from input parameter sample_points for the solver. The default behavior for the endpoint is to use time to determine the starting point. The default value is ‘POINT NULL’.
destination – Optional WKT ending point from input parameter sample_points for the solver. The default behavior for the endpoint is to use time to determine the destination point. The default value is ‘POINT NULL’.
partial_loading – For the match_supply_demand solver only. When false (non-default), trucks do not off-load at the demand (store) side if the remainder is less than the store’s need. Allowed values are:
- true – Partial off-loading at multiple store (demand) locations
- false – No partial off-loading allowed if supply is less than the store’s demand.
The default value is ‘true’.
max_combinations – For the match_supply_demand solver only. This is the cutoff for the number of generated combinations for sequencing the demand locations; it can be increased up to 2M. The default value is ‘10000’.
left_turn_penalty – This will add an additional weight over the edges labelled as ‘left turn’ if the ‘add_turn’ option parameter of
create_graph()
was invoked at graph creation. The default value is ‘0.0’.
right_turn_penalty – This will add an additional weight over the edges labelled as ‘right turn’ if the ‘add_turn’ option parameter of
create_graph()
was invoked at graph creation. The default value is ‘0.0’.
intersection_penalty – This will add an additional weight over the edges labelled as ‘intersection’ if the ‘add_turn’ option parameter of
create_graph()
was invoked at graph creation. The default value is ‘0.0’.
sharp_turn_penalty – This will add an additional weight over the edges labelled as ‘sharp turn’ or ‘u-turn’ if the ‘add_turn’ option parameter of
create_graph()
was invoked at graph creation. The default value is ‘0.0’.
aggregated_output – For the match_supply_demand solver only. When true (default), each record in the output table shows a particular truck’s scheduled cumulative round trip path (MULTILINESTRING) and the corresponding aggregated cost. Otherwise, each record shows a single scheduled truck route (LINESTRING) towards a particular demand location (store id) with its corresponding cost. The default value is ‘true’.
max_trip_cost – For the match_supply_demand solver only. If this constraint is greater than zero (default) then the trucks will skip travelling from one demand location to another if the cost between them is greater than this number (distance or time). Zero (default) value means no check is performed. The default value is ‘0.0’.
filter_folding_paths – For the markov_chain solver only. When true (non-default), each path per sequence combination is checked for folding-over patterns; this check can significantly increase the execution time depending on the chain width and the number of GPS samples. Allowed values are:
- true – Filter out the folded paths.
- false – Do not filter out the folded paths
The default value is ‘false’.
unit_unloading_cost – For the match_supply_demand solver only. The unit cost per load amount to be delivered. If this value is greater than zero (default) then the additional cost of this unit load multiplied by the total dropped load will be added over to the trip cost to the demand location. The default value is ‘0.0’.
max_num_threads – For the markov_chain solver only. If specified (greater than zero), the maximum number of threads will not be greater than the specified value. It can be lower due to the memory and the number of cores available. The default value of zero allows the algorithm to set the maximal number of threads within these constraints. The default value is ‘0’.
truck_service_limit – For the match_supply_demand solver only. If specified (greater than zero), any truck’s total service cost (distance or time) will be limited by the specified value, including multiple rounds (if set). The default value is ‘0.0’.
enable_truck_reuse – For the match_supply_demand solver only. If specified (true), all trucks can be scheduled for second rounds from their originating depots. Allowed values are:
- true – Allows reusing trucks for scheduling again.
- false – Trucks are scheduled only once from their depots.
The default value is ‘false’.
Returns
A dict with the following entries–
- result (bool) –
- Indicates a successful solution.
- match_score (float) –
- The mean square error calculation representing the map matching score. Values closer to zero are better.
- info (dict of str to str) –
- Additional information.
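As a sketch of the call shape, the following matches a GPS track against a road network graph with the markov_chain solver. The connection handle db, the graph name, and the table/column names are all hypothetical.

```python
# Minimal sketch: match a GPS track to a road network graph using the
# markov_chain solver. `db` is assumed to be an established GPUdb handle;
# the graph "road_graph" and table "gps_samples" (and its columns) are
# hypothetical names used only for illustration.

def match_gps_track(db):
    return db.match_graph(
        graph_name="road_graph",
        sample_points=[
            "gps_samples.x AS SAMPLE_X",
            "gps_samples.y AS SAMPLE_Y",
            "gps_samples.ts AS SAMPLE_TIME",
        ],
        solve_method="markov_chain",
        solution_table="matched_track",
        # A wider lookahead chain and more candidate segments improve
        # accuracy at the cost of execution time.
        options={"chain_width": "11", "num_segments": "5"},
    )
```

The match_score entry of the response can then be inspected; values closer to zero indicate a better fit.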
-
GPUdb.
merge_records
(table_name=None, source_table_names=None, field_maps=None, options={})[source]¶ Create a new empty result table (specified by input parameter table_name), and insert all records from source tables (specified by input parameter source_table_names) based on the field mapping information (specified by input parameter field_maps).
For merge records details and examples, see Merge Records. For limitations, see Merge Records Limitations and Cautions.
The field map (specified by input parameter field_maps) holds the user-specified maps of target table column names to source table columns. The array of input parameter field_maps must match one-to-one with the input parameter source_table_names, e.g., there’s a map present in input parameter field_maps for each table listed in input parameter source_table_names.
Parameters
- table_name (str) –
- The new result table name for the records to be merged. Must NOT be an existing table.
- source_table_names (list of str) –
- The list of source table names to get the records from. Must be existing table names. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- field_maps (list of dicts of str to str) –
- Contains a list of source/target column mappings, one mapping for each source table listed in input parameter source_table_names being merged into the target table specified by input parameter table_name. Each mapping contains the target column names (as keys) that the data in the mapped source columns or column expressions (as values) will be merged into. All of the source columns being merged into a given target column must match in type, as that type will determine the type of the new target column. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
collection_name – Name of a collection which is to contain the newly created merged table specified by input parameter table_name. If the collection provided is non-existent, the collection will be automatically created. If empty, then the newly created merged table will be a top-level table.
is_replicated – Indicates the distribution scheme for the data of the merged table specified in input parameter table_name. If true, the table will be replicated. If false, the table will be randomly sharded. Allowed values are:
- true
- false
The default value is ‘false’.
ttl – Sets the TTL of the merged table specified in input parameter table_name.
persist – If true, then the table specified in input parameter table_name will be persisted and will not expire unless a ttl is specified. If false, then the table will be an in-memory table and will expire unless a ttl is specified otherwise. Allowed values are:
- true
- false
The default value is ‘true’.
chunk_size – Indicates the number of records per chunk to be used for the merged table specified in input parameter table_name.
view_id – view this result table is part of. The default value is ‘’.
Returns
A dict with the following entries–
table_name (str)
- info (dict of str to str) –
- Additional information.
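A sketch of the one-map-per-source-table contract described above; every table and column name here is hypothetical, and db is assumed to be an established GPUdb handle.

```python
# Minimal sketch: merge two source tables into a new result table.
# field_maps pairs one-to-one with source_table_names; each map takes
# target column names (keys) to source columns or expressions (values).
# All names are hypothetical.

def merge_customer_tables(db):
    return db.merge_records(
        table_name="all_customers",          # must NOT already exist
        source_table_names=["customers_us", "customers_eu"],
        field_maps=[
            {"id": "cust_id", "name": "full_name"},
            {"id": "customer_id", "name": "UPPER(name)"},
        ],
        options={"is_replicated": "false", "persist": "true"},
    )
```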
-
GPUdb.
modify_graph
(graph_name=None, nodes=None, edges=None, weights=None, restrictions=None, options={})[source]¶ Update an existing graph network using given nodes, edges, weights, restrictions, and options.
IMPORTANT: It’s highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some graph examples before using this endpoint.
Parameters
- graph_name (str) –
- Name of the graph resource to modify.
- nodes (list of str) –
- Nodes with which to update existing input parameter nodes in graph specified by input parameter graph_name. Review Nodes for more information. Nodes must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS NODE_ID’, expressions, e.g., ‘ST_MAKEPOINT(column1, column2) AS NODE_WKTPOINT’, or raw values, e.g., ‘{9, 10, 11} AS NODE_ID’. If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- edges (list of str) –
- Edges with which to update existing input parameter edges in graph specified by input parameter graph_name. Review Edges for more information. Edges must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS EDGE_ID’, expressions, e.g., ‘SUBSTR(column, 1, 6) AS EDGE_NODE1_NAME’, or raw values, e.g., “{‘family’, ‘coworker’} AS EDGE_LABEL”. If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- weights (list of str) –
- Weights with which to update existing input parameter weights in graph specified by input parameter graph_name. Review Weights for more information. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS WEIGHTS_EDGE_ID’, expressions, e.g., ‘ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED’, or raw values, e.g., ‘{4, 15} AS WEIGHTS_VALUESPECIFIED’. If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- restrictions (list of str) –
- Restrictions with which to update existing input parameter restrictions in graph specified by input parameter graph_name. Review Restrictions for more information. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS RESTRICTIONS_EDGE_ID’, expressions, e.g., ‘column/2 AS RESTRICTIONS_VALUECOMPARED’, or raw values, e.g., ‘{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED’. If using raw values in an identifier combination, the number of values specified must match across the combination. Identifier combination(s) do not have to match the method used to create the graph, e.g., if column names were specified to create the graph, expressions or raw values could also be used to modify the graph. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
restriction_threshold_value – Value-based restriction comparison. Any node or edge with a RESTRICTIONS_VALUECOMPARED value greater than the restriction_threshold_value will not be included in the graph.
export_create_results – If set to true, returns the graph topology in the response as arrays. Allowed values are:
- true
- false
The default value is ‘false’.
enable_graph_draw – If set to true, adds a ‘EDGE_WKTLINE’ column identifier to the specified graph_table so the graph can be viewed via WMS; for social and non-geospatial graphs, the ‘EDGE_WKTLINE’ column identifier will be populated with spatial coordinates derived from a flattening layout algorithm so the graph can still be viewed. Allowed values are:
- true
- false
The default value is ‘false’.
save_persist – If set to true, the graph will be saved in the persist directory (see the config reference for more information). If set to false, the graph will be removed when the graph server is shutdown. Allowed values are:
- true
- false
The default value is ‘false’.
add_table_monitor – Adds a table monitor to every table used in the creation of the graph; this table monitor will trigger the graph to update dynamically upon inserts to the source table(s). Note that upon database restart, if save_persist is also set to true, the graph will be fully reconstructed and the table monitors will be reattached. For more details on table monitors, see
create_table_monitor()
. Allowed values are:
- true
- false
The default value is ‘false’.
graph_table – If specified, the created graph is also created as a table with the given name and following identifier columns: ‘EDGE_ID’, ‘EDGE_NODE1_ID’, ‘EDGE_NODE2_ID’. If left blank, no table is created. The default value is ‘’.
remove_label_only – When RESTRICTIONS on labeled entities are requested, if set to true this will NOT delete the entity but only the label associated with the entity. Otherwise (default), it will delete the label AND the entity. Allowed values are:
- true
- false
The default value is ‘false’.
add_turns – Adds dummy ‘pillowed’ edges around intersection nodes where there are more than three edges so that additional weight penalties can be imposed by the solve endpoints (this increases the total number of edges). Allowed values are:
- true
- false
The default value is ‘false’.
turn_angle – Value in degrees modifies the thresholds for attributing right, left, sharp turns, and intersections. It is the vertical deviation angle from the incoming edge to the intersection node. The larger the value, the larger the threshold for sharp turns and intersections; the smaller the value, the larger the threshold for right and left turns; 0 < turn_angle < 90. The default value is ‘60’.
Returns
A dict with the following entries–
- num_nodes (long) –
- Total number of nodes in the graph.
- num_edges (long) –
- Total number of edges in the graph.
- edges_ids (list of longs) –
- Edges given as pairs of node indices. Only populated if export_create_results is set to true.
- info (dict of str to str) –
- Additional information.
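A sketch of updating an existing graph with new edges and weights; as above, identifiers are grouped as combinations. The graph, table, and column names are hypothetical, and db is assumed to be an established GPUdb handle.

```python
# Minimal sketch: append new edges (with per-edge weights) to an
# existing graph and persist the result. All names are hypothetical.

def add_road_segments(db):
    return db.modify_graph(
        graph_name="road_graph",
        nodes=[],
        edges=[
            "new_roads.node1 AS EDGE_NODE1_ID",
            "new_roads.node2 AS EDGE_NODE2_ID",
        ],
        weights=["new_roads.length AS WEIGHTS_VALUESPECIFIED"],
        restrictions=[],
        options={"save_persist": "true"},
    )
```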
-
GPUdb.
query_graph
(graph_name=None, queries=None, restrictions=[], adjacency_table='', rings=1, options={})[source]¶ Employs a topological query on a network graph generated a-priori by
create_graph()
and returns a list of adjacent edge(s) or node(s), also known as an adjacency list, depending on what’s been provided to the endpoint; providing edges will return nodes and providing nodes will return edges. To determine the node(s) or edge(s) adjacent to a value from a given column, provide a list of values to input parameter queries. This field can be populated with column values from any table as long as the type is supported by the given identifier. See Query Identifiers for more information.
To return the adjacency list in the response, leave input parameter adjacency_table empty. To return the adjacency list in a table and not in the response, provide a value to input parameter adjacency_table and set export_query_results to false. To return the adjacency list both in a table and the response, provide a value to input parameter adjacency_table and set export_query_results to true.
IMPORTANT: It’s highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /query/graph examples before using this endpoint.
Parameters
- graph_name (str) –
- Name of the graph resource to query.
- queries (list of str) –
- Nodes or edges to be queried specified using query identifiers. Identifiers can be used with existing column names, e.g., ‘table.column AS QUERY_NODE_ID’, raw values, e.g., ‘{0, 2} AS QUERY_NODE_ID’, or expressions, e.g., ‘ST_MAKEPOINT(table.x, table.y) AS QUERY_NODE_WKTPOINT’. Multiple values can be provided as long as the same identifier is used for all values. If using raw values in an identifier combination, the number of values specified must match across the combination. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- restrictions (list of str) –
- Additional restrictions to apply to the nodes/edges of an existing graph. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS RESTRICTIONS_EDGE_ID’, expressions, e.g., ‘column/2 AS RESTRICTIONS_VALUECOMPARED’, or raw values, e.g., ‘{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED’. If using raw values in an identifier combination, the number of values specified must match across the combination. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- adjacency_table (str) –
Name of the table to store the resulting adjacencies. If left blank, the query results are instead returned in the response even if export_query_results is set to false. If the ‘QUERY_TARGET_NODE_LABEL’ query identifier is used in input parameter queries, then two additional columns will be available: ‘PATH_ID’ and ‘RING_ID’. See Using Labels for more information. The default value is ‘’.
- rings (int) –
- Sets the number of rings around the node to query for adjacency, with ‘1’ being the edges directly attached to the queried node. Also known as number of hops. For example, if it is set to ‘2’, the edge(s) directly attached to the queried node(s) will be returned; in addition, the edge(s) attached to the node(s) attached to the initial ring of edge(s) surrounding the queried node(s) will be returned. If the value is set to ‘0’, any nodes that meet the criteria in input parameter queries and input parameter restrictions will be returned. This parameter is only applicable when querying nodes. The default value is 1.
- options (dict of str to str) –
Additional parameters. The default value is an empty dict ( {} ). Allowed keys are:
force_undirected – If set to true, all inbound edges and outbound edges relative to the node will be returned. If set to false, only outbound edges relative to the node will be returned. This parameter is only applicable if the queried graph input parameter graph_name is directed and when querying nodes. Consult Directed Graphs for more details. Allowed values are:
- true
- false
The default value is ‘false’.
limit – When specified, limits the number of query results. Note that if the target_nodes_table is provided, the size of the corresponding table will be limited by the limit value. The default value is an empty dict ( {} ).
target_nodes_table – Name of the table to store the list of the final nodes reached during the traversal. If this value is left as the default, the table name will default to the input parameter adjacency_table value plus a ‘_nodes’ suffix, e.g., ‘<adjacency_table_name>_nodes’. The default value is ‘’.
restriction_threshold_value – Value-based restriction comparison. Any node or edge with a RESTRICTIONS_VALUECOMPARED value greater than the restriction_threshold_value will not be included in the solution.
export_query_results – Returns query results in the response. If set to true, the output parameter adjacency_list_int_array (if the query was based on IDs), output parameter adjacency_list_string_array (if the query was based on names), or output parameter adjacency_list_wkt_array (if the query was based on WKTs) will be populated with the results. If set to false, none of the arrays will be populated. Allowed values are:
- true
- false
The default value is ‘false’.
enable_graph_draw – If set to true, adds a WKT-type column named ‘QUERY_EDGE_WKTLINE’ to the given input parameter adjacency_table and inputs WKT values from the source graph (if available) or auto-generated WKT values (if there are no WKT values in the source graph). A subsequent call to the /wms endpoint can then be made to display the query results on a map. Allowed values are:
- true
- false
The default value is ‘false’.
and_labels – If set to true, the result of the query has entities that satisfy all of the target labels, instead of any. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- result (bool) –
- Indicates a successful query.
- adjacency_list_int_array (list of longs) –
- The adjacency entity integer ID: either edge IDs per node requested (if using QUERY_EDGE_ID or QUERY_NODE1_ID and QUERY_NODE2_ID in the input) or two node IDs per edge requested (if using QUERY_NODE_ID in the input).
- adjacency_list_string_array (list of str) –
- The adjacency entity string ID: either edge IDs per node requested (if using QUERY_EDGE_NAME or QUERY_NODE1_NAME and QUERY_NODE2_NAME in the input) or two node IDs per edge requested (if using QUERY_NODE_NAME in the input).
- adjacency_list_wkt_array (list of str) –
- The adjacency entity WKTPOINT or WKTLINE ID: either edge IDs per node requested (if using QUERY_EDGE_WKTLINE or QUERY_NODE1_WKTPOINT and QUERY_NODE2_WKTPOINT in the input) or two node IDs per edge requested (if using QUERY_NODE_WKTPOINT in the input).
- info (dict of str to str) –
- Additional information.
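A sketch of a node query with raw-value identifiers, returning the adjacency arrays in the response rather than a table. The graph name is hypothetical and db is assumed to be an established GPUdb handle.

```python
# Minimal sketch: query the two-ring neighborhood of a set of node IDs,
# exporting the adjacency list in the response. The graph name is
# hypothetical.

def neighbors_of(db, node_ids):
    ids = ", ".join(str(i) for i in node_ids)
    return db.query_graph(
        graph_name="road_graph",
        queries=["{%s} AS QUERY_NODE_ID" % ids],
        rings=2,
        # Blank adjacency_table plus export_query_results=true puts the
        # results in the response arrays only.
        adjacency_table="",
        options={"export_query_results": "true"},
    )
```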
-
GPUdb.
revoke_permission_proc
(name=None, permission=None, proc_name=None, options={})[source]¶ Revokes a proc-level permission from a user or role.
Parameters
- name (str) –
- Name of the user or role from which the permission will be revoked. Must be an existing user or role.
- permission (str) –
Permission to revoke from the user or role. Allowed values are:
- proc_execute – Execute access to the proc.
- proc_name (str) –
- Name of the proc to which the permission grants access. Must be an existing proc, or an empty string if the permission grants access to all procs.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- permission (str) –
- Value of input parameter permission.
- proc_name (str) –
- Value of input parameter proc_name.
- info (dict of str to str) –
- Additional information.
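A sketch of the call; the role and proc names are hypothetical, and db is assumed to be an established GPUdb handle.

```python
# Minimal sketch: revoke execute access on a single proc from a role.
# "analysts" and "nightly_etl" are hypothetical names.

def revoke_proc_execute(db):
    return db.revoke_permission_proc(
        name="analysts",
        permission="proc_execute",
        proc_name="nightly_etl",   # '' would instead target all procs
    )
```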
-
GPUdb.
revoke_permission_system
(name=None, permission=None, options={})[source]¶ Revokes a system-level permission from a user or role.
Parameters
- name (str) –
- Name of the user or role from which the permission will be revoked. Must be an existing user or role.
- permission (str) –
Permission to revoke from the user or role. Allowed values are:
- system_admin – Full access to all data and system functions.
- system_user_admin – Access to administer users and roles that do not have system_admin permission.
- system_write – Read and write access to all tables.
- system_read – Read-only access to all tables.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- permission (str) –
- Value of input parameter permission.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
revoke_permission_table
(name=None, permission=None, table_name=None, options={})[source]¶ Revokes a table-level permission from a user or role.
Parameters
- name (str) –
- Name of the user or role from which the permission will be revoked. Must be an existing user or role.
- permission (str) –
Permission to revoke from the user or role. Allowed values are:
- table_admin – Full read/write and administrative access to the table.
- table_insert – Insert access to the table.
- table_update – Update access to the table.
- table_delete – Delete access to the table.
- table_read – Read access to the table.
- table_name (str) –
- Name of the table to which the permission grants access. Must be an existing table, collection, or view.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- columns – Apply security to these columns, comma-separated. The default value is ‘’.
Returns
A dict with the following entries–
- name (str) –
- Value of input parameter name.
- permission (str) –
- Value of input parameter permission.
- table_name (str) –
- Value of input parameter table_name.
- info (dict of str to str) –
- Additional information.
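A sketch using the columns option to scope the revocation to specific columns; the user, table, and column names are hypothetical, and db is assumed to be an established GPUdb handle.

```python
# Minimal sketch: revoke read access on two sensitive columns of a
# table from a user. All names are hypothetical.

def revoke_column_read(db):
    return db.revoke_permission_table(
        name="jsmith",
        permission="table_read",
        table_name="customers",
        options={"columns": "ssn,credit_card"},  # comma-separated
    )
```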
-
GPUdb.
revoke_role
(role=None, member=None, options={})[source]¶ Revokes membership in a role from a user or role.
Parameters
- role (str) –
- Name of the role in which membership will be revoked. Must be an existing role.
- member (str) –
- Name of the user or role that will be revoked membership in input parameter role. Must be an existing user or role.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- role (str) –
- Value of input parameter role.
- member (str) –
- Value of input parameter member.
- info (dict of str to str) –
- Additional information.
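A sketch of the call; the role and member names are hypothetical, and db is assumed to be an established GPUdb handle.

```python
# Minimal sketch: remove a user from a role. Both names are hypothetical.

def remove_from_role(db):
    return db.revoke_role(role="analysts", member="jsmith")
```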
-
GPUdb.
show_graph
(graph_name='', options={})[source]¶ Shows information and characteristics of graphs that exist on the graph server.
Parameters
- graph_name (str) –
- Name of the graph on which to retrieve information. If left as the default value, information about all graphs is returned. The default value is ‘’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
show_original_request – If set to true, the request that was originally used to create the graph is also returned as JSON. Allowed values are:
- true
- false
The default value is ‘true’.
Returns
A dict with the following entries–
- result (bool) –
- Indicates a successful operation. This call will fail if the graph specified in the request does not exist.
- graph_names (list of str) –
- Name(s) of the graph(s).
- directed (list of bools) –
- Whether or not the edges of the graph have directions (bi-directional edges can still exist in directed graphs). Consult Directed Graphs for more details.
- num_nodes (list of longs) –
- Total number of nodes in the graph.
- num_edges (list of longs) –
- Total number of edges in the graph.
- is_persisted (list of bools) –
- Shows whether or not the graph is persisted (saved and loaded on launch).
- is_sync_db (list of bools) –
- Shows whether or not the graph is linked to the original tables that created it, and will potentially be re-created instead of loaded from persist on launch.
- has_insert_table_monitor (list of bools) –
- Shows whether or not the graph has an insert table monitor attached to it.
- original_request (list of str) –
- The original client request used to create the graph (before any expression evaluation or separator processing).
- info (dict of str to str) –
- Additional information.
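Since the response entries are parallel lists (one element per graph), they can be zipped into per-graph summaries. A sketch, assuming db is an established GPUdb handle:

```python
# Minimal sketch: list all graphs and pair each name with its node and
# edge counts, using the parallel response lists documented above.

def summarize_graphs(db):
    resp = db.show_graph(
        graph_name="",  # default: information about all graphs
        options={"show_original_request": "false"},
    )
    return list(zip(resp["graph_names"], resp["num_nodes"], resp["num_edges"]))
```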
-
GPUdb.
show_proc
(proc_name='', options={})[source]¶ Shows information about a proc.
Parameters
- proc_name (str) –
- Name of the proc to show information about. If specified, must be the name of a currently existing proc. If not specified, information about all procs will be returned. The default value is ‘’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
include_files – If set to true, the files that make up the proc will be returned. If set to false, the files will not be returned. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- proc_names (list of str) –
- The proc names.
- execution_modes (list of str) –
The execution modes of the procs named in output parameter proc_names. Allowed values are:
- @INNER_STRUCTURE
- files (list of dicts of str to str) –
- Maps of the files that make up the procs named in output parameter proc_names.
- commands (list of str) –
- The commands (excluding arguments) that will be invoked when the procs named in output parameter proc_names are executed.
- args (list of lists of str) –
- Arrays of command-line arguments that will be passed to the procs named in output parameter proc_names when executed.
- options (list of dicts of str to str) –
- The optional parameters for the procs named in output parameter proc_names.
- info (dict of str to str) –
- Additional information.
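A sketch that fetches proc metadata while skipping the potentially large file contents; the proc name is hypothetical, and db is assumed to be an established GPUdb handle.

```python
# Minimal sketch: describe a proc without returning its files.
# "nightly_etl" is a hypothetical proc name.

def describe_proc(db, proc_name="nightly_etl"):
    return db.show_proc(
        proc_name=proc_name,
        options={"include_files": "false"},
    )
```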
-
GPUdb.
show_proc_status
(run_id='', options={})[source]¶ Shows the statuses of running or completed proc instances. Results are grouped by run ID (as returned from
execute_proc()
) and data segment ID (each invocation of the proc command on a data segment is assigned a data segment ID).
Parameters
- run_id (str) –
- The run ID of a specific proc instance for which the status will be returned. If a proc with a matching run ID is not found, the response will be empty. If not specified, the statuses of all executed proc instances will be returned. The default value is ‘’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
clear_complete – If set to true, if a proc instance has completed (either successfully or unsuccessfully) then its status will be cleared and no longer returned in subsequent calls. Allowed values are:
- true
- false
The default value is ‘false’.
run_tag – If input parameter run_id is specified, return the status for a proc instance that has a matching run ID and a matching run tag that was provided to
execute_proc()
. If input parameter run_id is not specified, return statuses for all proc instances where a matching run tag was provided toexecute_proc()
. The default value is ‘’.
Returns
A dict with the following entries–
- proc_names (dict of str to str) –
- The proc names corresponding to the returned run IDs.
- params (dict of str to dicts of str to str) –
- The string params passed to
execute_proc()
for the returned run IDs. - bin_params (dict of str to dicts of str to str) –
- The binary params passed to
execute_proc()
for the returned run IDs. - input_table_names (dict of str to lists of str) –
- The input table names passed to
execute_proc()
for the returned run IDs. - input_column_names (dict of str to dicts of str to lists of str) –
- The input column names passed to
execute_proc()
for the returned run IDs, supplemented with the column names for input tables not included in the input column name map. - output_table_names (dict of str to lists of str) –
- The output table names passed to
execute_proc()
for the returned run IDs. - options (dict of str to dicts of str to str) –
- The optional parameters passed to
execute_proc()
for the returned run IDs. - overall_statuses (dict of str to str) –
Overall statuses for the returned run IDs. Note that these are rollups and individual statuses may differ between data segments for the same run ID; see output parameter statuses and output parameter messages for statuses from individual data segments. Allowed values are:
- running – The proc instance is currently running.
- complete – The proc instance completed with no errors.
- killed – The proc instance was killed before completion.
- error – The proc instance failed with an error.
- statuses (dict of str to dicts of str to str) –
- Statuses for the returned run IDs, grouped by data segment ID.
- messages (dict of str to dicts of str to str) –
- Messages containing additional status information for the returned run IDs, grouped by data segment ID.
- results (dict of str to dicts of str to dicts of str to str) –
- String results for the returned run IDs, grouped by data segment ID.
- bin_results (dict of str to dicts of str to dicts of str to str) –
- Binary results for the returned run IDs, grouped by data segment ID.
- timings (dict of str to dicts of str to dicts of str to longs) –
- Timing information for the returned run IDs, grouped by data segment ID.
- info (dict of str to str) –
- Additional information.
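Since output parameter overall_statuses maps run IDs to rolled-up status strings, a small helper can flag problem runs. This is a sketch against the documented response shape; the sample dict is hypothetical, and a real response would come from `db.show_proc_status()`:

```python
def failed_runs(status_response):
    """Return run IDs whose overall status is 'error' or 'killed', sorted."""
    return sorted(
        run_id
        for run_id, status in status_response["overall_statuses"].items()
        if status in ("error", "killed")
    )

# Hypothetical response fragment using the documented status values.
sample = {
    "overall_statuses": {"42": "complete", "43": "error", "44": "running"},
}

print(failed_runs(sample))  # ['43']
```

Pairing this with the clear_complete option lets a polling loop report failures once and then drop completed runs from subsequent responses.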
-
GPUdb.
show_resource_statistics
(options={})[source]¶ Requests various statistics for storage/memory tiers and resource groups. Returns statistics on a per-rank basis.
Parameters
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- statistics_map (dict of str to str) –
- Map of resource statistics
- info (dict of str to str) –
- Additional information.
-
GPUdb.
show_resource_groups
(names=None, options={})[source]¶ Requests resource group properties. Returns detailed information about the requested resource groups.
Parameters
- names (list of str) –
- List of names of groups to be shown. A single entry with an empty string returns all groups. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
show_default_values – If true include values of fields that are based on the default resource group. Allowed values are:
- true
- false
The default value is ‘true’.
show_default_group – If true include the default resource group in the response. Allowed values are:
- true
- false
The default value is ‘true’.
Returns
A dict with the following entries–
- groups (list of dicts of str to str) –
- Map of resource group information.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
show_security
(names=None, options={})[source]¶ Shows security information relating to users and/or roles. If the caller is not a system administrator, only information relating to the caller and their roles is returned.
Parameters
- names (list of str) –
- A list of names of users and/or roles about which security information is requested. If none are provided, information about all users and roles will be returned. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- types (dict of str to str) –
Map of user/role name to the type of that user/role. Allowed values are:
- internal_user – A user whose credentials are managed by the database system.
- external_user – A user whose credentials are managed by an external LDAP.
- role – A role.
- roles (dict of str to lists of str) –
- Map of user/role name to a list of names of roles of which that user/role is a member.
- permissions (dict of str to lists of dicts of str to str) –
- Map of user/role name to a list of permissions directly granted to that user/role.
- resource_groups (dict of str to str) –
- Map of user name to resource group name.
- info (dict of str to str) –
- Additional information.
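The types map mixes users and roles in one dict, so grouping by type is a common first step. A sketch against the documented response shape (the names in the sample are hypothetical):

```python
from collections import defaultdict

def names_by_type(security_response):
    """Group user/role names by their reported type from a show_security-style response."""
    grouped = defaultdict(list)
    for name, kind in security_response["types"].items():
        grouped[kind].append(name)
    return {kind: sorted(names) for kind, names in grouped.items()}

# Hypothetical response fragment using the documented type values.
sample = {
    "types": {
        "alice": "internal_user",
        "bob": "external_user",
        "analysts": "role",
    }
}

print(names_by_type(sample))
```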
-
GPUdb.
show_sql_proc
(procedure_name='', options={})[source]¶ Shows information about SQL procedures, including the full definition of each requested procedure.
Parameters
- procedure_name (str) –
- Name of the procedure for which to retrieve the information. If blank, then information about all procedures is returned. The default value is ‘’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
no_error_if_not_exists – If true, no error will be returned if the requested procedure does not exist. If false, an error will be returned if the requested procedure does not exist. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- procedure_names (list of str) –
- A list of the names of the requested procedures.
- procedure_definitions (list of str) –
- A list of the definitions for the requested procedures.
- additional_info (list of dicts of str to str) –
Additional information about the respective tables in the requested procedures. Allowed values are:
- @INNER_STRUCTURE
- info (dict of str to str) –
- Additional information.
-
GPUdb.
show_statistics
(table_names=None, options={})[source]¶ Retrieves the collected column statistics for the specified table.
Parameters
- table_names (list of str) –
- Tables whose metadata will be fetched. All provided tables must exist, or an error is returned. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- table_names (list of str) –
- Value of input parameter table_names.
- stastistics_map (list of lists of dicts of str to str) –
- A list of maps which contain the column statistics of the table input parameter table_names.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
show_system_properties
(options={})[source]¶ Returns server configuration and version related information to the caller. The admin tool uses it to present server related information to the user.
Parameters
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
- properties – A list of comma separated names of properties requested. If not specified, all properties will be returned.
Returns
A dict with the following entries–
- property_map (dict of str to str) –
A map of server configuration parameters and version information. Allowed keys are:
- conf.enable_worker_http_servers –
Boolean value indicating whether the system is configured for
multi-head ingestion.
Allowed values are:
- TRUE – Indicates that the system is configured for multi-head ingestion.
- FALSE – Indicates that the system is NOT configured for multi-head ingestion.
- conf.worker_http_server_ips – Semicolon (‘;’) separated string of IP addresses of all the ingestion-enabled worker heads of the system.
- conf.worker_http_server_ports – Semicolon (‘;’) separated string of the port numbers of all the ingestion-enabled worker ranks of the system.
- conf.hm_http_port – The host manager port number (an integer value).
- conf.enable_ha – Flag indicating whether high availability (HA) is set up (a boolean value).
- conf.ha_ring_head_nodes – A comma-separated string of high availability (HA) ring node URLs. If HA is not set up, then an empty string.
- info (dict of str to str) –
- Additional information.
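The worker IP and port lists are documented as semicolon-separated, position-aligned strings, so multi-head ingestion setup typically starts by zipping them into endpoints. A sketch assuming that documented format (the addresses below are hypothetical):

```python
def worker_endpoints(property_map):
    """Pair each ingestion-enabled worker IP with its port.

    conf.worker_http_server_ips and conf.worker_http_server_ports are
    semicolon-separated strings whose entries line up by position.
    """
    ips = property_map["conf.worker_http_server_ips"].split(";")
    ports = property_map["conf.worker_http_server_ports"].split(";")
    return list(zip(ips, ports))

# Hypothetical property_map fragment shaped per the documented keys.
sample = {
    "conf.worker_http_server_ips": "10.0.0.1;10.0.0.2",
    "conf.worker_http_server_ports": "9192;9193",
}

print(worker_endpoints(sample))  # [('10.0.0.1', '9192'), ('10.0.0.2', '9193')]
```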
-
GPUdb.
show_system_status
(options={})[source]¶ Provides server configuration and health related status to the caller. The admin tool uses it to present server related information to the user.
Parameters
- options (dict of str to str) –
- Optional parameters, currently unused. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- status_map (dict of str to str) –
- A map of server configuration and health related status.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
show_system_timing
(options={})[source]¶ Returns the last 100 database requests along with the request timing and internal job id. The admin tool uses it to present request timing information to the user.
Parameters
- options (dict of str to str) –
- Optional parameters, currently unused. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- endpoints (list of str) –
- List of recently called endpoints, most recent first.
- time_in_ms (list of floats) –
- List of time (in ms) of the recent requests.
- jobIds (list of str) –
- List of the internal job ids for the recent requests.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
show_table
(table_name=None, options={})[source]¶ Retrieves detailed information about a table, view, or collection, specified in input parameter table_name. If the supplied input parameter table_name is a collection, the call can return information about either the collection itself or the tables and views it contains. If input parameter table_name is empty, information about all collections and top-level tables and views can be returned.
If the option get_sizes is set to true, then the number of records in each table is returned (in output parameter sizes and output parameter full_sizes), along with the total number of objects across all requested tables (in output parameter total_size and output parameter total_full_size).
For a collection, setting the show_children option to false returns only information about the collection itself; setting show_children to true returns a list of tables and views contained in the collection, along with their corresponding detail.
To retrieve a list of every table, view, and collection in the database, set input parameter table_name to ‘*’ and show_children to true.
Parameters
- table_name (str) –
- Name of the table for which to retrieve the information. If blank, then information about all collections and top-level tables and views is returned.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
force_synchronous – If true, the table sizes will wait for a read lock before returning. Allowed values are:
- true
- false
The default value is ‘true’.
get_sizes – If true, the number of records in each table, along with a cumulative count, will be returned; otherwise, these fields are left blank. Allowed values are:
- true
- false
The default value is ‘false’.
show_children – If input parameter table_name is a collection, then true will return information about the children of the collection, and false will return information about the collection itself. If input parameter table_name is a table or view, show_children must be false. If input parameter table_name is empty, then show_children must be true. Allowed values are:
- true
- false
The default value is ‘true’.
no_error_if_not_exists – If false will return an error if the provided input parameter table_name does not exist. If true then it will return an empty result. Allowed values are:
- true
- false
The default value is ‘false’.
get_column_info – If true then column info (memory usage, etc) will be returned. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
- table_name (str) –
- Value of input parameter table_name.
- table_names (list of str) –
- If input parameter table_name is a table or view, then the single element of the array is input parameter table_name. If input parameter table_name is a collection and show_children is set to true, then this array is populated with the names of all tables and views contained by the given collection; if show_children is false then this array will only include the collection name itself. If input parameter table_name is an empty string, then the array contains the names of all collections and top-level tables.
- table_descriptions (list of lists of str) –
List of descriptions for the respective tables in output parameter table_names. Allowed values are:
- COLLECTION
- VIEW
- REPLICATED
- JOIN
- RESULT_TABLE
- MATERIALIZED_VIEW
- MATERIALIZED_VIEW_MEMBER
- MATERIALIZED_VIEW_UNDER_CONSTRUCTION
- type_ids (list of str) –
- Type ids of the respective tables in output parameter table_names.
- type_schemas (list of str) –
- Type schemas of the respective tables in output parameter table_names.
- type_labels (list of str) –
- Type labels of the respective tables in output parameter table_names.
- properties (list of dicts of str to lists of str) –
- Property maps of the respective tables in output parameter table_names.
- additional_info (list of dicts of str to str) –
Additional information about the respective tables in output parameter table_names. Allowed values are:
- @INNER_STRUCTURE
- sizes (list of longs) –
- If get_sizes is true, an array containing the number of records of each corresponding table in output parameter table_names. Otherwise, an empty array.
- full_sizes (list of longs) –
- If get_sizes is true, an array containing the number of records of each corresponding table in output parameter table_names (same values as output parameter sizes). Otherwise, an empty array.
- join_sizes (list of floats) –
- If get_sizes is true, an array containing the number of unfiltered records in the cross product of the sub-tables of each corresponding join-table in output parameter table_names. For simple tables, this number will be the same as output parameter sizes. For join-tables, this value gives the number of joined-table rows that must be processed by any aggregate functions operating on the table. Otherwise, (if get_sizes is false), an empty array.
- total_size (long) –
- If get_sizes is true, the sum of the elements of output parameter sizes. Otherwise, -1.
- total_full_size (long) –
- If get_sizes is true, the sum of the elements of output parameter full_sizes (same value as output parameter total_size). Otherwise, -1.
- info (dict of str to str) –
- Additional information.
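When get_sizes is true, output parameter sizes lines up with output parameter table_names, and total_size is their sum. A sketch of turning that into a per-table lookup (the response dict is a hypothetical example; a real one would come from something like `db.show_table("my_collection", options={"get_sizes": "true", "show_children": "true"})`):

```python
def table_sizes(show_table_response):
    """Map each returned table name to its record count (requires get_sizes=true)."""
    return dict(zip(show_table_response["table_names"],
                    show_table_response["sizes"]))

# Hypothetical response fragment; table names are invented for illustration.
sample = {
    "table_names": ["orders", "customers"],
    "sizes": [1200, 45],
    "total_size": 1245,
}

sizes = table_sizes(sample)
assert sum(sizes.values()) == sample["total_size"]  # sizes sum to total_size
print(sizes)  # {'orders': 1200, 'customers': 45}
```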
-
GPUdb.
show_table_metadata
(table_names=None, options={})[source]¶ Retrieves the user provided metadata for the specified tables.
Parameters
- table_names (list of str) –
- Tables whose metadata will be fetched. All provided tables must exist, or an error is returned. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- table_names (list of str) –
- Value of input parameter table_names.
- metadata_maps (list of dicts of str to str) –
- A list of maps which contain the metadata of the tables in the order the tables are listed in input parameter table_names. Each map has (metadata attribute name, metadata attribute value) pairs.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
show_tables_by_type
(type_id=None, label=None, options={})[source]¶ Gets names of the tables whose type matches the given criteria. Each table has a particular type. This type comprises the schema and properties of the table and sometimes a type label. This function allows a look up of the existing tables based on full or partial type information. The operation is synchronous.
Parameters
- type_id (str) –
- Type id returned by a call to
create_type()
. - label (str) –
- Optional user supplied label which can be used instead of the type_id to retrieve all tables with the given label.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- table_names (list of str) –
- List of tables matching the input criteria.
- info (dict of str to str) –
- Additional information.
-
GPUdb.
show_triggers
(trigger_ids=None, options={})[source]¶ Retrieves information regarding the specified triggers or all existing triggers currently active.
Parameters
- trigger_ids (list of str) –
- List of IDs of the triggers whose information is to be retrieved. An empty list means information will be retrieved on all active triggers. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
- trigger_map (dict of str to dicts of str to str) –
- This dictionary contains (key, value) pairs of (trigger ID, information map/dictionary), where each key is a Unicode string representing a trigger ID. Each value is an embedded dictionary of (key, value) pairs, where the keys consist of ‘table_name’, ‘type’, and the parameter names relating to the trigger type, e.g., nai, min, max. The values are Unicode strings (numeric values are also converted to strings) representing the value of the respective parameter. If a trigger is associated with multiple tables, the string value for table_name contains a comma-separated list of table names.
- info (dict of str to str) –
- Additional information.
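Because the table_name value inside each trigger's information map can be a comma-separated list, consumers usually split it back into individual table names. A sketch against the documented trigger_map shape (trigger IDs and table names below are hypothetical):

```python
def trigger_tables(trigger_map):
    """Map each trigger ID to the list of tables it watches.

    Per the docs, 'table_name' holds a comma-separated list when a
    trigger is associated with multiple tables.
    """
    return {
        trigger_id: info["table_name"].split(",")
        for trigger_id, info in trigger_map.items()
    }

# Hypothetical trigger_map fragment shaped per the documented output.
sample = {
    "trig-1": {"table_name": "t1", "type": "range", "min": "0", "max": "10"},
    "trig-2": {"table_name": "t1,t2", "type": "area"},
}

print(trigger_tables(sample))  # {'trig-1': ['t1'], 'trig-2': ['t1', 't2']}
```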
-
GPUdb.
show_types
(type_id=None, label=None, options={})[source]¶ Retrieves information for the specified data type ID or type label. For all data types that match the input criteria, the database returns the type ID, the type schema, the label (if available), and the type’s column properties.
Parameters
- type_id (str) –
- Type Id returned in response to a call to
create_type()
. - label (str) –
- Option string that was supplied by user in a call to
create_type()
. - options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
no_join_types – When set to ‘true’, no join types will be included. Allowed values are:
- true
- false
The default value is ‘false’.
Returns
A dict with the following entries–
type_ids (list of str)
type_schemas (list of str)
labels (list of str)
properties (list of dicts of str to lists of str)
- info (dict of str to str) –
- Additional information.
-
GPUdb.
solve_graph
(graph_name=None, weights_on_edges=[], restrictions=[], solver_type='SHORTEST_PATH', source_nodes=[], destination_nodes=[], solution_table='graph_solutions', options={})[source]¶ Solves an existing graph for a type of problem (e.g., shortest path, page rank, travelling salesman, etc.) using source nodes, destination nodes, and additional, optional weights and restrictions.
IMPORTANT: It’s highly recommended that you review the Network Graphs & Solvers concepts documentation, the Graph REST Tutorial, and/or some /solve/graph examples before using this endpoint.
Parameters
- graph_name (str) –
- Name of the graph resource to solve.
- weights_on_edges (list of str) –
- Additional weights to apply to the edges of an existing graph. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS WEIGHTS_EDGE_ID’, expressions, e.g., ‘ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED’, or constant values, e.g., ‘{4, 15, 2} AS WEIGHTS_VALUESPECIFIED’. Any provided weights will be added (in the case of ‘WEIGHTS_VALUESPECIFIED’) to or multiplied with (in the case of ‘WEIGHTS_FACTORSPECIFIED’) the existing weight(s). If using constant values in an identifier combination, the number of values specified must match across the combination. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- restrictions (list of str) –
- Additional restrictions to apply to the nodes/edges of an existing graph. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS RESTRICTIONS_EDGE_ID’, expressions, e.g., ‘column/2 AS RESTRICTIONS_VALUECOMPARED’, or constant values, e.g., ‘{0, 0, 0, 1} AS RESTRICTIONS_ONOFFCOMPARED’. If using constant values in an identifier combination, the number of values specified must match across the combination. If remove_previous_restrictions is set to true, any provided restrictions will replace the existing restrictions. If remove_previous_restrictions is set to false, any provided restrictions will be added (in the case of ‘RESTRICTIONS_VALUECOMPARED’) to or replaced (in the case of ‘RESTRICTIONS_ONOFFCOMPARED’). The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- solver_type (str) –
The type of solver to use for the graph. Allowed values are:
- SHORTEST_PATH – Solves for the optimal (shortest) path based on weights and restrictions from one source to one or more destination nodes. Also known as the Dijkstra solver.
- PAGE_RANK – Solves for the probability of each destination node being visited based on the links of the graph topology. Weights are not required to use this solver.
- PROBABILITY_RANK – Solves for the transitional probability (Hidden Markov) for each node based on the weights (probability assigned over given edges).
- CENTRALITY – Solves for the degree of a node, depicting how many pairs of individuals would have to go through the node to reach one another in the minimum number of hops. Also known as betweenness.
- MULTIPLE_ROUTING – Solves for the minimum-cost cumulative path of a round trip starting from the given source, visiting each given destination node once, then returning to the source. Also known as the travelling salesman problem.
- INVERSE_SHORTEST_PATH – Solves for the optimal path cost for each destination node to route to the source node. Also known as inverse Dijkstra or the service man routing problem.
- BACKHAUL_ROUTING – Solves for optimal routes that connect remote asset nodes to the fixed (backbone) asset nodes.
- ALLPATHS – Solves for all paths with costs between the min and max solution radii. Be sure to limit the result set with the ‘max_solution_targets’ option. The minimum cost should be >= the shortest-path cost.
The default value is ‘SHORTEST_PATH’.
- source_nodes (list of str) –
- It can be one of the nodal identifiers - e.g: ‘NODE_WKTPOINT’ for source nodes. For BACKHAUL_ROUTING, this list depicts the fixed assets. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- destination_nodes (list of str) –
- It can be one of the nodal identifiers - e.g: ‘NODE_WKTPOINT’ for destination (target) nodes. For BACKHAUL_ROUTING, this list depicts the remote assets. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- solution_table (str) –
- Name of the table to store the solution. The default value is ‘graph_solutions’.
- options (dict of str to str) –
Additional parameters. The default value is an empty dict ( {} ). Allowed keys are:
max_solution_radius – For SHORTEST_PATH and INVERSE_SHORTEST_PATH solvers only. Sets the maximum solution cost radius, which ignores the input parameter destination_nodes list and instead outputs the nodes within the radius sorted by ascending cost. If set to ‘0.0’, the setting is ignored. The default value is ‘0.0’.
min_solution_radius – For SHORTEST_PATH and INVERSE_SHORTEST_PATH solvers only. Applicable only when max_solution_radius is set. Sets the minimum solution cost radius, which ignores the input parameter destination_nodes list and instead outputs the nodes within the radius sorted by ascending cost. If set to ‘0.0’, the setting is ignored. The default value is ‘0.0’.
max_solution_targets – For SHORTEST_PATH and INVERSE_SHORTEST_PATH solvers only. Sets the maximum number of solution targets, which ignores the input parameter destination_nodes list and instead outputs no more than n number of nodes sorted by ascending cost where n is equal to the setting value. If set to 0, the setting is ignored. The default value is ‘0’.
export_solve_results – Returns solution results inside the output parameter result_per_destination_node array in the response if set to true. Allowed values are:
- true
- false
The default value is ‘false’.
remove_previous_restrictions – Ignore the restrictions applied to the graph during the creation stage and only use the restrictions specified in this request if set to true. Allowed values are:
- true
- false
The default value is ‘false’.
restriction_threshold_value – Value-based restriction comparison. Any node or edge with a RESTRICTIONS_VALUECOMPARED value greater than the restriction_threshold_value will not be included in the solution.
uniform_weights – When specified, assigns the given value to all the edges in the graph. Note that weights provided in input parameter weights_on_edges will override this value.
left_turn_penalty – This will add an additional weight over the edges labelled as ‘left turn’ if the ‘add_turn’ option parameter of the
create_graph()
was invoked at graph creation. The default value is ‘0.0’.
right_turn_penalty – This will add an additional weight over the edges labelled as ‘right turn’ if the ‘add_turn’ option parameter of the
create_graph()
was invoked at graph creation. The default value is ‘0.0’.
intersection_penalty – This will add an additional weight over the edges labelled as ‘intersection’ if the ‘add_turn’ option parameter of the
create_graph()
was invoked at graph creation. The default value is ‘0.0’.
sharp_turn_penalty – This will add an additional weight over the edges labelled as ‘sharp turn’ or ‘u-turn’ if the ‘add_turn’ option parameter of the
create_graph()
was invoked at graph creation. The default value is ‘0.0’.num_best_paths – For MULTIPLE_ROUTING solvers only; sets the number of shortest paths computed from each node. This is the heuristic criterion. Default value of zero allows the number to be computed automatically by the solver. The user may want to override this parameter to speed-up the solver. The default value is ‘0’.
max_num_combinations – For MULTIPLE_ROUTING solvers only; sets the cap on the combinatorial sequences generated. Overriding the default value of two million with a lesser value can potentially speed up the solver. The default value is ‘2000000’.
accurate_snaps – Valid for single source-destination pair solves when points are described by NODE_WKTPOINT identifier types. When true (the default), the solver snaps to the nearest node of the graph; otherwise, it searches for the closest entity that could be an edge. In the latter case (false), the solver adjusts the resulting cost with weights proportional to the ratio of the snap location within the edge. This may be overkill when performance is a concern, as the difference is typically well under 1 percent. In batch runs, where performance is of utmost importance, the option is always treated as ‘false’. Allowed values are:
- true
- false
The default value is ‘true’.
output_edge_path – If true then concatenated edge ids will be added as the EDGE path column of the solution table for each source and target pair in shortest path solves. Allowed values are:
- true
- false
The default value is ‘false’.
output_wkt_path – If true then concatenated wkt line segments will be added as the Wktroute column of the solution table for each source and target pair in shortest path solves. Allowed values are:
- true
- false
The default value is ‘true’.
Returns
A dict with the following entries–
- result (bool) –
- Indicates a successful solution.
- result_per_destination_node (list of floats) –
- Cost or Pagerank (based on solver type) for each destination node requested. Only populated if export_solve_results is set to true.
- info (dict of str to str) –
- Additional information.
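Putting the pieces together, here is a sketch of assembling arguments for a SHORTEST_PATH solve. The graph name, WKT points, and column expression are hypothetical; only the argument shapes and option keys follow the documentation above, and the actual call requires a live server:

```python
# Arguments for a hypothetical shortest-path solve, shaped per solve_graph().
solve_args = {
    "graph_name": "road_network",  # hypothetical graph created earlier
    "weights_on_edges": ["ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED"],
    "restrictions": [],
    "solver_type": "SHORTEST_PATH",
    "source_nodes": ["POINT(-73.98 40.75) AS NODE_WKTPOINT"],
    "destination_nodes": ["POINT(-73.95 40.78) AS NODE_WKTPOINT"],
    "solution_table": "graph_solutions",
    "options": {
        "export_solve_results": "true",  # populate result_per_destination_node
        "output_wkt_path": "true",       # include the Wktroute column
    },
}

# A real call (live server required) would look like:
# response = db.solve_graph(**solve_args)
# if response["result"]:
#     print(response["result_per_destination_node"])

print(sorted(solve_args["options"]))  # ['export_solve_results', 'output_wkt_path']
```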
-
GPUdb.
update_records
(table_name=None, expressions=None, new_values_maps=None, records_to_insert=[], records_to_insert_str=[], record_encoding='binary', options={}, record_type=None)[source]¶ Runs multiple predicate-based updates in a single call. With the list of given expressions, any matching record’s column values will be updated as provided in input parameter new_values_maps. There is also an optional ‘upsert’ capability where if a particular predicate doesn’t match any existing record, then a new record can be inserted.
Note that this operation can only be run on an original table and not on a collection or a result view.
This operation can update primary key values. By default, only ‘pure primary key’ predicates are allowed when updating primary key values. If the primary key for a table is the column ‘attr1’, then the operation will only accept predicates of the form: “attr1 == ‘foo’” if the attr1 column is being updated. For a composite primary key (e.g., columns ‘attr1’ and ‘attr2’), this operation will only accept predicates of the form: “(attr1 == ‘foo’) and (attr2 == ‘bar’)”. In other words, all primary key columns must appear in an equality predicate in the expressions. Furthermore, each ‘pure primary key’ predicate must be unique within a given request. These restrictions can be removed by utilizing some available options through input parameter options.
The update_on_existing_pk option specifies the record collision policy for tables with a primary key, and is ignored on tables with no primary key.
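The key constraint above is that expressions and new_values_maps are parallel lists: the i-th expression selects the records that receive the i-th values map. A sketch of building them (the table and column names are hypothetical, and the commented-out call requires a live server):

```python
# Parallel lists for a hypothetical update_records() call: each predicate
# pairs with the new-values map at the same index.
expressions = [
    "id = 1",
    "id = 2",
]
new_values_maps = [
    {"status": "shipped"},
    {"status": "cancelled", "note": None},  # None is allowed per the signature (str and/or None)
]
assert len(expressions) == len(new_values_maps)  # must match, per the docs

# A real call (live server required) would look like:
# db.update_records(
#     table_name="orders",
#     expressions=expressions,
#     new_values_maps=new_values_maps,
#     options={"global_expression": "region = 'EU'"},
# )
```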
Parameters
- table_name (str) –
- Table to be updated. Must be a currently existing table and not a collection or view.
- expressions (list of str) –
- A list of the actual predicates, one for each update; format
should follow the guidelines
here
. The user can provide a single element (which will be automatically promoted to a list internally) or a list. - new_values_maps (list of dicts of str to str and/or None) –
- List of new values for the matching records. Each element is a map with (key, value) pairs where the keys are the names of the columns whose values are to be updated; the values are the new values. The number of elements in the list should match the length of input parameter expressions. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- records_to_insert (list of str) –
- An optional list of new binary-avro encoded records to insert, one for each update. If one of input parameter expressions does not yield a matching record to be updated, then the corresponding element from this list will be added to the table. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- records_to_insert_str (list of str) –
- An optional list of new json-avro encoded objects to insert, one for each update, to be added to the set if the particular update did not affect any objects. The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- record_encoding (str) –
Identifies which of input parameter records_to_insert and input parameter records_to_insert_str should be used. Allowed values are:
- binary
- json
The default value is ‘binary’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
global_expression – An optional global expression to reduce the search space of the predicates listed in input parameter expressions. The default value is ‘’.
bypass_safety_checks – When set to true, all predicates are available for primary key updates. Keep in mind that it is possible to destroy data in this case, since a single predicate may match multiple objects (potentially all records of a table); updating all of those records to have the same primary key will, due to the primary key uniqueness constraint, effectively delete all but one of the updated records. Allowed values are:
- true
- false
The default value is ‘false’.
update_on_existing_pk – Specifies the record collision policy for tables with a primary key when updating columns of the primary key or inserting new records. If true, existing records with primary key values that match those of a record being updated or inserted will be replaced by the updated and new records. If false, existing records with matching primary key values will remain unchanged, and the updated or new records with primary key values that match those of existing records will be discarded. If the specified table does not have a primary key, then this option has no effect. Allowed values are:
- true – Overwrite existing records when updated and inserted records have the same primary keys
- false – Discard updated and inserted records when the same primary keys already exist
The default value is ‘false’.
update_partition – Force qualifying records to be deleted and reinserted so their partition membership will be reevaluated. Allowed values are:
- true
- false
The default value is ‘false’.
truncate_strings – If set to true, any strings which are too long for their charN string fields will be truncated to fit. Allowed values are:
- true
- false
The default value is ‘false’.
use_expressions_in_new_values_maps – When set to true, all new values in input parameter new_values_maps are considered as expression values. When set to false, all new values in input parameter new_values_maps are considered as constants. NOTE: When true, string constants will need to be quoted to avoid being evaluated as expressions. Allowed values are:
- true
- false
The default value is ‘false’.
record_id – ID of a single record to be updated (returned in the call to
insert_records()
or
get_records_from_collection()
).
- record_type (RecordType) –
- A
RecordType
object with which the binary data will be encoded. If None, it is assumed that the data is already encoded, and no further encoding will occur. Default is None.
Returns
A dict with the following entries–
- count_updated (long) –
- Total number of records updated.
- counts_updated (list of longs) –
- Total number of records updated per predicate in input parameter expressions.
- count_inserted (long) –
- Total number of records inserted (due to expressions not matching any existing records).
- counts_inserted (list of longs) –
- Total number of records inserted per predicate in input parameter expressions (will be either 0 or 1 for each expression).
- info (dict of str to str) –
- Additional information.
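The collision policy selected by the update_on_existing_pk option can be sketched with a plain-Python model of a table keyed by its primary key value. This is an illustration of the documented semantics only, not the gpudb client API:

```python
def upsert(table, pk, record, update_on_existing_pk=False):
    """Model of the documented collision policy for a table (a dict keyed
    by primary key value): overwrite when the option is true, discard the
    incoming record when it is false (the default)."""
    if pk in table and not update_on_existing_pk:
        return False          # collision: updated/inserted record is discarded
    table[pk] = record        # no collision, or overwrite allowed
    return True

t = {"k1": {"attr1": "k1", "val": 1}}
# Default policy: the colliding record is discarded, t is unchanged
upsert(t, "k1", {"attr1": "k1", "val": 2})        # False
# With the option enabled, the existing record is replaced
upsert(t, "k1", {"attr1": "k1", "val": 2}, True)  # True
```

On a table with no primary key there is nothing to collide with, which is why the option is documented as having no effect there.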
-
GPUdb.
update_records_by_series
(table_name=None, world_table_name=None, view_name='', reserved=[], options={})[source]¶ Updates the view specified by input parameter table_name to include full series (track) information from the input parameter world_table_name for the series (tracks) present in the input parameter view_name.
Parameters
- table_name (str) –
- Name of the view on which the update operation will be performed. Must be an existing view.
- world_table_name (str) –
- Name of the table containing the complete series (track) information.
- view_name (str) –
- Name of the view containing the series (tracks) to be updated. The default value is ‘’.
- reserved (list of str) –
- The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- options (dict of str to str) –
- Optional parameters. The default value is an empty dict ( {} ).
Returns
A dict with the following entries–
count (int)
- info (dict of str to str) –
- Additional information.
-
GPUdb.
visualize_image_chart
(table_name=None, x_column_names=None, y_column_names=None, min_x=None, max_x=None, min_y=None, max_y=None, width=None, height=None, bg_color=None, style_options=None, options={})[source]¶ Scatter plot is the only plot type currently supported. A non-numeric column can be specified as x or y column and jitters can be added to them to avoid excessive overlapping. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). The image is contained in the output parameter image_data field.
Parameters
- table_name (str) –
- Name of the table containing the data to be drawn as a chart.
- x_column_names (list of str) –
- Names of the columns containing the data mapped to the x axis of a chart. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- y_column_names (list of str) –
- Names of the columns containing the data mapped to the y axis of a chart. The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- min_x (float) –
- Lower bound for the x column values. For non-numeric x column, each x column item is mapped to an integral value starting from 0.
- max_x (float) –
- Upper bound for the x column values. For non-numeric x column, each x column item is mapped to an integral value starting from 0.
- min_y (float) –
- Lower bound for the y column values. For non-numeric y column, each y column item is mapped to an integral value starting from 0.
- max_y (float) –
- Upper bound for the y column values. For non-numeric y column, each y column item is mapped to an integral value starting from 0.
- width (int) –
- Width of the generated image in pixels.
- height (int) –
- Height of the generated image in pixels.
- bg_color (str) –
- Background color of the generated image.
- style_options (dict of str to lists of str) –
Rendering style options for a chart. Allowed keys are:
pointcolor – The color of points in the plot represented as a hexadecimal number. The default value is ‘0000FF’.
pointsize – The size of points in the plot represented as number of pixels. The default value is ‘3’.
pointshape – The shape of points in the plot. Allowed values are:
- none
- circle
- square
- diamond
- hollowcircle
- hollowsquare
- hollowdiamond
The default value is ‘square’.
cb_pointcolors – Point color class break information consisting of three entries: class-break attribute, class-break values/ranges, and point color values. This option overrides the pointcolor option if both are provided. Class-break ranges are represented in the form of “min:max”. Class-break values/ranges and point color values are separated by cb_delimiter, e.g. {“price”, “20:30;30:40;40:50”, “0xFF0000;0x00FF00;0x0000FF”}.
cb_pointsizes – Point size class break information consisting of three entries: class-break attribute, class-break values/ranges, and point size values. This option overrides the pointsize option if both are provided. Class-break ranges are represented in the form of “min:max”. Class-break values/ranges and point size values are separated by cb_delimiter, e.g. {“states”, “NY;TX;CA”, “3;5;7”}.
cb_pointshapes – Point shape class break information consisting of three entries: class-break attribute, class-break values/ranges, and point shape names. This option overrides the pointshape option if both are provided. Class-break ranges are represented in the form of “min:max”. Class-break values/ranges and point shape names are separated by cb_delimiter, e.g. {“states”, “NY;TX;CA”, “circle;square;diamond”}.
cb_delimiter – A character or string which separates per-class values in a class-break style option string. The default value is ‘;’.
x_order_by – An expression or aggregate expression by which non-numeric x column values are sorted, e.g. “avg(price) descending”.
y_order_by – An expression or aggregate expression by which non-numeric y column values are sorted, e.g. “avg(price)”, which defaults to “avg(price) ascending”.
scale_type_x – Type of x axis scale. Allowed values are:
- none – No scale is applied to the x axis.
- log – A base-10 log scale is applied to the x axis.
The default value is ‘none’.
scale_type_y – Type of y axis scale. Allowed values are:
- none – No scale is applied to the y axis.
- log – A base-10 log scale is applied to the y axis.
The default value is ‘none’.
min_max_scaled – If this option is set to “false”, the endpoint expects the request’s min/max values to not yet be scaled; they will be scaled according to scale_type_x or scale_type_y for the response. If this option is set to “true”, the endpoint expects the request’s min/max values to already be scaled according to scale_type_x/scale_type_y, and the response’s min/max values will be equal to the request’s min/max values. The default value is ‘false’.
jitter_x – Amplitude of horizontal jitter applied to non-numeric x column values. The default value is ‘0.0’.
jitter_y – Amplitude of vertical jitter applied to non-numeric y column values. The default value is ‘0.0’.
plot_all – If this option is set to “true”, all non-numeric column values are plotted, ignoring the min_x, max_x, min_y and max_y parameters. The default value is ‘false’.
- options (dict of str to str) –
Optional parameters. The default value is an empty dict ( {} ). Allowed keys are:
image_encoding – Encoding to be applied to the output image. When using JSON serialization it is recommended to specify this as base64. Allowed values are:
- base64 – Apply base64 encoding to the output image.
- none – Do not apply any additional encoding to the output image.
The default value is ‘none’.
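When image_encoding is set to base64 (as recommended for JSON serialization), the returned image_data field must be decoded before the bytes can be written out as a PNG. A minimal standalone sketch of that post-processing step, with a hypothetical helper name:

```python
import base64

def decode_image(image_data, image_encoding="none"):
    """Return raw image bytes from an image_data response field,
    undoing base64 encoding when it was requested."""
    if image_encoding == "base64":
        return base64.b64decode(image_data)
    return image_data  # 'none': payload is already raw bytes

# Simulated round trip with a PNG-signature prefix standing in for real data
raw = decode_image(base64.b64encode(b"\x89PNG").decode(), "base64")
# raw == b"\x89PNG"
```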
Returns
A dict with the following entries–
- min_x (float) –
- Lower bound for the x column values as provided in input parameter min_x or calculated for non-numeric columns when plot_all option is used.
- max_x (float) –
- Upper bound for the x column values as provided in input parameter max_x or calculated for non-numeric columns when plot_all option is used.
- min_y (float) –
- Lower bound for the y column values as provided in input parameter min_y or calculated for non-numeric columns when plot_all option is used.
- max_y (float) –
- Upper bound for the y column values as provided in input parameter max_y or calculated for non-numeric columns when plot_all option is used.
- width (int) –
- Width of the image as provided in input parameter width.
- height (int) –
- Height of the image as provided in input parameter height.
- bg_color (str) –
- Background color of the image as provided in input parameter bg_color.
- image_data (str) –
- The generated image data.
- axes_info (dict of str to lists of str) –
Information returned for drawing labels for the axes associated with non-numeric columns. Allowed keys are:
- sorted_x_values – Sorted non-numeric x column value list for drawing x axis label.
- location_x – X axis label positions of sorted_x_values in pixel coordinates.
- sorted_y_values – Sorted non-numeric y column value list for drawing y axis label.
- location_y – Y axis label positions of sorted_y_values in pixel coordinates.
- info (dict of str to str) –
- Additional information.
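The class-break style entries above (cb_pointcolors, cb_pointsizes, cb_pointshapes) pair cb_delimiter-separated “min:max” ranges with per-class values. A small standalone parser sketch of that string format; the helper is hypothetical, not part of the gpudb API:

```python
def parse_class_breaks(ranges, values, delimiter=";"):
    """Pair "min:max" range strings with their per-class values,
    e.g. ("20:30;30:40", "0xFF0000;0x00FF00")."""
    pairs = []
    for rng, val in zip(ranges.split(delimiter), values.split(delimiter)):
        lo, hi = (float(x) for x in rng.split(":"))
        pairs.append(((lo, hi), val))
    return pairs

breaks = parse_class_breaks("20:30;30:40;40:50", "0xFF0000;0x00FF00;0x0000FF")
# breaks[0] == ((20.0, 30.0), "0xFF0000")
```

Discrete value breaks (e.g. “NY;TX;CA” in cb_pointsizes) would be paired the same way, just without the range split.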
-
GPUdb.
visualize_isochrone
(graph_name=None, source_node=None, max_solution_radius='-1.0', weights_on_edges=[], restrictions=[], num_levels='1', generate_image=True, levels_table='', style_options=None, solve_options={}, contour_options={}, options={})[source]¶ Generate an image containing isolines for travel results using an existing graph. Isolines represent curves of equal cost, with cost typically referring to the time or distance assigned as the weights of the underlying graph. See Network Graphs & Solvers for more information on graphs.
Parameters
- graph_name (str) –
- Name of the graph on which the isochrone is to be computed.
- source_node (str) –
- Starting vertex on the underlying graph from/to which the isochrones are created.
- max_solution_radius (float) –
- Extent of the search radius around input parameter source_node. Set to ‘-1.0’ for unrestricted search radius. The default value is -1.0.
- weights_on_edges (list of str) –
- Additional weights to apply to the edges of an existing graph. Weights must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS WEIGHTS_EDGE_ID’, or expressions, e.g., ‘ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED’. Any provided weights will be added (in the case of ‘WEIGHTS_VALUESPECIFIED’) to or multiplied with (in the case of ‘WEIGHTS_FACTORSPECIFIED’) the existing weight(s). The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- restrictions (list of str) –
- Additional restrictions to apply to the nodes/edges of an existing graph. Restrictions must be specified using identifiers; identifiers are grouped as combinations. Identifiers can be used with existing column names, e.g., ‘table.column AS RESTRICTIONS_EDGE_ID’, or expressions, e.g., ‘column/2 AS RESTRICTIONS_VALUECOMPARED’. If remove_previous_restrictions is set to true, any provided restrictions will replace the existing restrictions. If remove_previous_restrictions is set to false, any provided restrictions will be added (in the case of ‘RESTRICTIONS_VALUECOMPARED’) to or replaced (in the case of ‘RESTRICTIONS_ONOFFCOMPARED’). The default value is an empty list ( [] ). The user can provide a single element (which will be automatically promoted to a list internally) or a list.
- num_levels (int) –
- Number of equally-separated isochrones to compute. The default value is 1.
- generate_image (bool) –
If set to true, generates a PNG image of the isochrones in the response. Allowed values are:
- true
- false
The default value is True.
- levels_table (str) –
- Name of the table to output the isochrones, containing levels and their corresponding WKT geometry. If no value is provided, the table is not generated. The default value is ‘’.
- style_options (dict of str to str) –
Various style related options of the isochrone image. Allowed keys are:
line_size – The width of the contour lines in pixels. The default value is ‘3’.
color – Color of generated isolines. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). If alpha is specified and flooded contours are enabled, it will be used as the transparency of the latter. The default value is ‘FF696969’.
bg_color – When input parameter generate_image is set to true, background color of the generated image. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). The default value is ‘00000000’.
text_color – When add_labels is set to true, color for the labels. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). The default value is ‘FF000000’.
colormap – Colormap for contours or fill-in regions when applicable. All color values must be in the format RRGGBB or AARRGGBB (to specify the alpha value). Allowed values are:
- jet
- accent
- afmhot
- autumn
- binary
- blues
- bone
- brbg
- brg
- bugn
- bupu
- bwr
- cmrmap
- cool
- coolwarm
- copper
- cubehelix
- dark2
- flag
- gist_earth
- gist_gray
- gist_heat
- gist_ncar
- gist_rainbow
- gist_stern
- gist_yarg
- gnbu
- gnuplot2
- gnuplot
- gray
- greens
- greys
- hot
- hsv
- inferno
- magma
- nipy_spectral
- ocean
- oranges
- orrd
- paired
- pastel1
- pastel2
- pink
- piyg
- plasma
- prgn
- prism
- pubu
- pubugn
- puor
- purd
- purples
- rainbow
- rdbu
- rdgy
- rdpu
- rdylbu
- rdylgn
- reds
- seismic
- set1
- set2
- set3
- spectral
- spring
- summer
- terrain
- viridis
- winter
- wistia
- ylgn
- ylgnbu
- ylorbr
- ylorrd
The default value is ‘jet’.
- solve_options (dict of str to str) –
Solver specific parameters. The default value is an empty dict ( {} ). Allowed keys are:
remove_previous_restrictions – Ignore the restrictions applied to the graph during the creation stage and only use the restrictions specified in this request if set to true. Allowed values are:
- true
- false
The default value is ‘false’.
restriction_threshold_value – Value-based restriction comparison. Any node or edge with a ‘RESTRICTIONS_VALUECOMPARED’ value greater than the restriction_threshold_value will not be included in the solution.
uniform_weights – When specified, assigns the given value to all the edges in the graph. Note that weights provided in input parameter weights_on_edges will override this value.
- contour_options (dict of str to str) –
Solver specific parameters. The default value is an empty dict ( {} ). Allowed keys are:
projection – Spatial Reference System (i.e. EPSG Code). Allowed values are:
- 3857
- 102100
- 900913
- EPSG:4326
- PLATE_CARREE
- EPSG:900913
- EPSG:102100
- EPSG:3857
- WEB_MERCATOR
The default value is ‘PLATE_CARREE’.
width – When input parameter generate_image is set to true, width of the generated image. The default value is ‘512’.
height – When input parameter generate_image is set to true, height of the generated image. If the default value is used, the height is set to the value resulting from multiplying the aspect ratio by the width. The default value is ‘-1’.
search_radius – When interpolating the graph solution to generate the isochrone, neighborhood of influence of sample data (in percent of the image/grid). The default value is ‘20’.
grid_size – When interpolating the graph solution to generate the isochrone, number of subdivisions along the x axis when building the grid (the y is computed using the aspect ratio of the output image). The default value is ‘100’.
color_isolines – Color each isoline according to the colormap; otherwise, use the foreground color. Allowed values are:
- true
- false
The default value is ‘true’.
add_labels – If set to true, add labels to the isolines. Allowed values are:
- true
- false
The default value is ‘false’.
labels_font_size – When add_labels is set to true, size of the font (in pixels) to use for labels. The default value is ‘12’.
labels_font_family – When add_labels is set to true, font name to be used when adding labels. The default value is ‘arial’.
labels_search_window – When add_labels is set to true, a search window is used to rate the local quality of each isoline. Smooth, continuous, long stretches with relatively flat angles are favored. The provided value is multiplied by the labels_font_size to calculate the final window size. The default value is ‘4’.
labels_intralevel_separation – When add_labels is set to true, this value determines the distance (in multiples of the labels_font_size) to use when separating labels of different values. The default value is ‘4’.
labels_interlevel_separation – When add_labels is set to true, this value determines the distance (in percent of the total window size) to use when separating labels of the same value. The default value is ‘20’.
labels_max_angle – When add_labels is set to true, maximum angle (in degrees) from the vertical to use when adding labels. The default value is ‘60’.
- options (dict of str to str) –
Additional parameters. The default value is an empty dict ( {} ). Allowed keys are:
solve_table – Name of the table to host intermediate solve results containing the position and cost for each vertex in the graph. If the default value is used, a temporary table is created and deleted once the solution is calculated. The default value is ‘’.
is_replicated – If set to true, replicate the solve_table. Allowed values are:
- true
- false
The default value is ‘true’.
data_min_x – Lower bound for the x values. If not provided, it will be computed from the bounds of the input data.
data_max_x – Upper bound for the x values. If not provided, it will be computed from the bounds of the input data.
data_min_y – Lower bound for the y values. If not provided, it will be computed from the bounds of the input data.
data_max_y – Upper bound for the y values. If not provided, it will be computed from the bounds of the input data.
concavity_level – Factor to qualify the concavity of the isochrone curves. The lower the value, the more convex (with ‘0’ being completely convex and ‘1’ being the most concave). The default value is ‘0.5’.
use_priority_queue_solvers – Sets the solver methods explicitly if true. Allowed values are:
- true – Uses the solvers scheduled for ‘shortest_path’ and ‘inverse_shortest_path’ based on solve_direction
- false – Uses the solvers ‘priority_queue’ and ‘inverse_priority_queue’ based on solve_direction
The default value is ‘false’.
solve_direction – Specifies whether the solve is directed toward the source node or starts from it. Allowed values are:
- from_source – Shortest path to get to the source (inverse Dijkstra)
- to_source – Shortest path to source (Dijkstra)
The default value is ‘from_source’.
Returns
A dict with the following entries–
- width (int) –
- Width of the image as provided in width.
- height (int) –
- Height of the image as provided in height.
- bg_color (long) –
- Background color of the image as provided in bg_color.
- image_data (str) –
- Generated contour image data.
- info (dict of str to str) –
- Additional information.
- solve_info (dict of str to str) –
- Additional information.
- contour_info (dict of str to str) –
- Additional information.
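The num_levels parameter requests equally-separated isochrones within the solved radius. The corresponding cost thresholds can be sketched as follows; this is an illustration of the spacing only (a hypothetical helper, not how the server computes its contours):

```python
def isochrone_levels(max_cost, num_levels=1):
    """Equally-separated cost thresholds between 0 and max_cost,
    one per isochrone level."""
    step = max_cost / num_levels
    return [step * (i + 1) for i in range(num_levels)]

# e.g. a 60-minute solution radius split into 3 isochrones
isochrone_levels(60.0, 3)   # [20.0, 40.0, 60.0]
```

With the default num_levels of 1, a single isoline at the full solution radius is produced.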