Constructor
new GPUdb(url, options)
Creates a GPUdb API object for the specified URL using the given options.
Once created, all options are immutable; to use a different URL or change
options, create a new instance. (Creating a new instance does not
communicate with the server and should not cause performance concerns.)
Parameters:
url (String | Array.<String>)
    The URL of the GPUdb server (e.g., http://hostname:9191). May also be
    specified as a list of URLs; all URLs in the list must be well formed.
options (Object) <optional>
    A set of configurable options for the GPUdb API.
    Properties:
    username (String) <optional>
        The username to be used for authentication to GPUdb. This username
        will be sent with every GPUdb request made via the API along with
        the specified password and may be used for authorization decisions
        by the server if it is so configured. If neither username nor
        password is specified, no authentication will be performed.
    password (String) <optional>
        The password to be used for authentication to GPUdb. This password
        will be sent with every GPUdb request made via the API along with
        the specified username and may be used for authorization decisions
        by the server if it is so configured. If neither username nor
        password is specified, no authentication will be performed.
    timeout (Number) <optional>
        The timeout value, in milliseconds, after which requests to GPUdb
        will be aborted. A timeout value of zero is interpreted as an
        infinite timeout. Note that timeout is not supported for
        synchronous requests, which will not return until a response is
        received and cannot be aborted.
- Source:
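Example: a minimal construction sketch (the URL, credentials, and timeout
below are placeholder values, and the GPUdb class is assumed to already be
loaded):

    // Connect with optional authentication and a 60-second timeout
    // (a timeout of 0 would mean an infinite timeout).
    var db = new GPUdb("http://hostname:9191", {
        username: "example_user",
        password: "example_password",
        timeout: 60000
    });

    // A list of URLs may also be given; each must be well formed.
    var dbMulti = new GPUdb(["http://host1:9191", "http://host2:9191"]);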
Classes
- FileHandler
- Type
Members
(readonly) END_OF_SET :Number
Constant used with certain requests to indicate that the maximum allowed
number of results should be returned.
Type:
- Source:
(readonly) api_version :String
The version number of the GPUdb JavaScript API.
Type:
- Source:
(readonly) getCookie :function
Function to get the request cookie.
Type:
- Source:
(readonly) hostname :String
The hostname of the current GPUdb server.
Type:
- Source:
(readonly) parsedUrls :Array.<ConnectionToken>
The URLs of the GPUdb servers.
Type:
- Source:
(readonly) password :String
The password used for authentication to GPUdb. Will be an empty
string if none was provided to the
GPUdb constructor.
Type:
- Source:
(readonly) pathname :String
The pathname of the current GPUdb server.
Type:
- Source:
(readonly) port :String
The port of the current GPUdb server.
Type:
- Source:
(readonly) protocol :String
The protocol of the current GPUdb server address.
Type:
- Source:
(readonly) setCookie :function
Function to set the response cookie.
Type:
- Source:
(readonly) timeout :Number
The timeout value, in milliseconds, after which requests to GPUdb
will be aborted. A timeout of zero is interpreted as an infinite
timeout. Will be zero if none was provided to the
GPUdb constructor.
Type:
- Source:
(readonly) url :String
The URL of the current GPUdb server.
Type:
- Source:
(readonly) username :String
The username used for authentication to GPUdb. Will be an empty
string if none was provided to the
GPUdb constructor.
Type:
- Source:
Methods
SqlIterator(sql, batchSize, sqlOptions)
A generator function that iterates over the records returned by executing
the SQL statement passed to it.
Parameters:
Name |
Type |
Description |
sql |
String
|
The SQL statement to execute |
batchSize |
number
|
The number of records to fetch in each batch, defaults to 10,000 |
sqlOptions |
Map
|
A Map of SQL options to pass to the statement |
- Source:
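Example: a usage sketch, assuming the generator is asynchronous and can be
consumed with for await...of (the table name is a placeholder):

    // Stream query results in batches of 1,000 records per fetch.
    async function printAll(db) {
        var sqlOptions = new Map();
        for await (const record of db.SqlIterator(
                "SELECT * FROM example_table", 1000, sqlOptions)) {
            console.log(record);
        }
    }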
add_http_header(header, value)
Adds an HTTP header to the map of additional HTTP headers to send to
the server with each endpoint request. If the header is already in the map,
its value is replaced with the specified value. The user is not allowed
to modify the following headers:
- 'Accept'
- 'Authorization'
- 'Content-type'
- 'X-Kinetica-Group'
Parameters:
Name |
Type |
Description |
header |
String
|
The custom header to add. |
value |
String
|
The value for the custom header to add. |
- Source:
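Example: a brief sketch (the method name above follows the API's
snake_case convention; the header name and value are placeholders):

    // Send a custom header with every subsequent endpoint request;
    // reserved headers such as 'Authorization' cannot be overridden.
    db.add_http_header("X-Example-Source", "example-app");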
admin_add_host(host_address, options, callback) → {Promise}
Adds a host to an existing cluster.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
host_address |
String
|
IP address of the host that will be added to
the cluster. This host must have the same
version of Kinetica installed as the cluster to
which it is being added. |
options |
Object
|
Optional parameters.
- 'dry_run': If set to
true , only validation checks will be
performed. No host is added.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'accepts_failover': If set to
true , the host will accept processes
(ranks, graph server, etc.) in the event of a
failover on another node in the cluster.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'public_address': The
publicly-accessible IP address for the host being
added, typically specified for clients using
multi-head operations. This setting is required if
any other host(s) in the cluster specify a public
address.
- 'host_manager_public_url': The
publicly-accessible full path URL to the host
manager on the host being added, e.g.,
'http://172.123.45.67:9300'. The default host
manager port can be found in the list of ports used by Kinetica.
- 'ram_limit': The desired RAM limit for
the host being added, i.e. the sum of RAM usage for
all processes on the host will not be able to
exceed this value. Supported units: K (thousand),
KB (kilobytes), M (million), MB (megabytes), G
(billion), GB (gigabytes); if no unit is provided,
the value is assumed to be in bytes. For example,
if
ram_limit is set to 10M, the
resulting RAM limit is 10 million bytes. Set
ram_limit to -1 to have no RAM limit.
- 'gpus': Comma-delimited list of GPU
indices (starting at 1) that are eligible for
running worker processes. If left blank, all GPUs
on the host being added will be eligible.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
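Example: a sketch using the promise form (the address and options are
placeholder values):

    // Validate adding a host to the cluster without actually adding it.
    db.admin_add_host("172.123.45.67", { dry_run: "true" })
        .then(function (response) { console.log(response); })
        .catch(function (error) { console.error(error); });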
admin_add_host_request(request, callback) → {Promise}
Adds a host to an existing cluster.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_add_ranks(hosts, config_params, options, callback) → {Promise}
Add one or more ranks to an existing Kinetica cluster. The new ranks will
not contain any data initially (other than replicated tables) and will not
be assigned any shards. To rebalance data and shards across the cluster, use
GPUdb#admin_rebalance
.
The database must be offline for this operation, see
GPUdb#admin_offline
For example, if attempting to add three new ranks (two ranks on host
172.123.45.67 and one rank on host 172.123.45.68) to a Kinetica cluster with
additional configuration parameters:
* hosts
would be an array including 172.123.45.67 in the first two indices
(signifying two ranks being added to host 172.123.45.67) and
172.123.45.68 in the last index (signifying one rank being added
to host 172.123.45.68)
* config_params
would be an array of maps, with each map corresponding to the ranks
being added in hosts
. The key of each map would be
the configuration parameter name and the value would be the
parameter's value, e.g. '{"rank.gpu":"1"}'
This endpoint's processing includes copying all replicated table data to the
new rank(s) and therefore could take a long time. The API call may time out
if run directly. It is recommended to run this endpoint asynchronously via
GPUdb#create_job
.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
hosts |
Array.<String>
|
Array of host IP addresses (matching a
hostN.address from the gpudb.conf file), or host
identifiers (e.g. 'host0' from the gpudb.conf
file), on which to add ranks to the cluster. The
hosts must already be in the cluster. If needed
beforehand, to add a new host to the cluster use
GPUdb#admin_add_host . Include the
same entry as many times as there are ranks to add
to the cluster, e.g., if two ranks on host
172.123.45.67 should be added, hosts
could look like '["172.123.45.67",
"172.123.45.67"]'. All ranks will be added
simultaneously, i.e. they're not added in the order
of this array. Each entry in this array corresponds
to the entry at the same index in the
config_params . |
config_params |
Array.<Object>
|
Array of maps containing configuration
parameters to apply to the new ranks
found in hosts . For example,
'{"rank.gpu":"2",
"tier.ram.rank.limit":"10000000000"}'.
Currently, the available parameters
are rank-specific parameters in the Network,
Hardware,
Text Search, and
RAM Tiered Storage
sections in the gpudb.conf file, with the
key exception of the 'rankN.host' settings
in the Network section that will be
determined by
hosts instead. Though many of
these configuration parameters typically
are affixed with
'rankN' in the gpudb.conf file (where N is
the rank number), the 'N' should be omitted
in
config_params as the new rank
number(s) are not allocated until the ranks
have been added
to the cluster. Each entry in this array
corresponds to the entry at the same index
in the
hosts . This array must either
be completely empty or have the same number
of elements as
the hosts . An empty
config_params array will
result in the new ranks being set
with default parameters. |
options |
Object
|
Optional parameters.
- 'dry_run': If
true , only
validation checks will be performed. No ranks are
added.
Supported values: 'true', 'false'.
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
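Example: a sketch of the three-rank scenario described above, run as a
validation-only dry run:

    // Two ranks on 172.123.45.67 and one on 172.123.45.68, each pinned
    // to a GPU via rank-specific configuration parameters.
    var hosts = ["172.123.45.67", "172.123.45.67", "172.123.45.68"];
    var configParams = [
        { "rank.gpu": "1" },
        { "rank.gpu": "2" },
        { "rank.gpu": "1" }
    ];
    db.admin_add_ranks(hosts, configParams, { dry_run: "true" })
        .then(function (response) { console.log(response); });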
admin_add_ranks_request(request, callback) → {Promise}
Add one or more ranks to an existing Kinetica cluster. The new ranks will
not contain any data initially (other than replicated tables) and will not
be assigned any shards. To rebalance data and shards across the cluster, use
GPUdb#admin_rebalance
.
The database must be offline for this operation, see
GPUdb#admin_offline
For example, if attempting to add three new ranks (two ranks on host
172.123.45.67 and one rank on host 172.123.45.68) to a Kinetica cluster with
additional configuration parameters:
* hosts
would be an array including 172.123.45.67 in the first two indices
(signifying two ranks being added to host 172.123.45.67) and
172.123.45.68 in the last index (signifying one rank being added
to host 172.123.45.68)
* config_params
would be an array of maps, with each map corresponding to the ranks
being added in hosts
. The key of each map would be
the configuration parameter name and the value would be the
parameter's value, e.g. '{"rank.gpu":"1"}'
This endpoint's processing includes copying all replicated table data to the
new rank(s) and therefore could take a long time. The API call may time out
if run directly. It is recommended to run this endpoint asynchronously via
GPUdb#create_job
.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_alter_host(host, options, callback) → {Promise}
Alter properties on an existing host in the cluster. Currently, the only
property that can be altered is a host's ability to accept failover
processes.
Parameters:
Name |
Type |
Description |
host |
String
|
Identifies the host this applies to. Can be the host
address, or formatted as 'hostN' where N is the host
number as specified in gpudb.conf |
options |
Object
|
Optional parameters
- 'accepts_failover': If set to
true , the host will accept processes
(ranks, graph server, etc.) in the event of a
failover on another node in the cluster.
Supported values: 'true', 'false'.
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_alter_host_request(request, callback) → {Promise}
Alter properties on an existing host in the cluster. Currently, the only
property that can be altered is a host's ability to accept failover
processes.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_alter_jobs(job_ids, action, options, callback) → {Promise}
Perform the requested action on a list of one or more job(s). Based
on the type of job and the current state of execution, the action may not be
successfully executed. The final result of the attempted actions for each
specified job is returned in the status array of the response. See
Job Manager for more
information.
Parameters:
Name |
Type |
Description |
job_ids |
Array.<Number>
|
Jobs to be modified. |
action |
String
|
Action to be performed on the jobs specified by
job_ids.
Supported values:
|
options |
Object
|
Optional parameters.
- 'job_tag': Job tag returned in call to
create the job
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
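Example: a sketch of acting on a single job (the job ID is a placeholder,
and 'cancel' is an assumed action value, since the supported-values list is
not reproduced above):

    db.admin_alter_jobs([123456789], "cancel", {})
        .then(function (response) { console.log(response.status); });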
admin_alter_jobs_request(request, callback) → {Promise}
Perform the requested action on a list of one or more job(s). Based
on the type of job and the current state of execution, the action may not be
successfully executed. The final result of the attempted actions for each
specified job is returned in the status array of the response. See
Job Manager for more
information.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_backup_begin(options, callback) → {Promise}
Prepares the system for a backup by closing all open file handles after
allowing current active jobs to complete. When the database is in backup
mode, queries that result in a disk write operation will be blocked until
backup mode has been completed by using
GPUdb#admin_backup_end
.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_backup_begin_request(request, callback) → {Promise}
Prepares the system for a backup by closing all open file handles after
allowing current active jobs to complete. When the database is in backup
mode, queries that result in a disk write operation will be blocked until
backup mode has been completed by using
GPUdb#admin_backup_end
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_backup_end(options, callback) → {Promise}
Restores the system to normal operating mode after a backup has completed,
allowing any queries that were blocked to complete.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
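Example: a sketch of a complete backup window built from
GPUdb#admin_backup_begin and GPUdb#admin_backup_end:

    // Quiesce disk writes, take the external backup, then resume.
    db.admin_backup_begin({})
        .then(function () {
            // ... perform the filesystem-level backup here ...
            return db.admin_backup_end({});
        })
        .then(function () { console.log("Backup window closed"); })
        .catch(function (error) { console.error(error); });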
admin_backup_end_request(request, callback) → {Promise}
Restores the system to normal operating mode after a backup has completed,
allowing any queries that were blocked to complete.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_ha_refresh(options, callback) → {Promise}
Restarts the HA processing on the given cluster as a mechanism of accepting
breaking HA conf changes. Additionally, the cluster is put into read-only
mode while HA is restarting.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_ha_refresh_request(request, callback) → {Promise}
Restarts the HA processing on the given cluster as a mechanism of accepting
breaking HA conf changes. Additionally, the cluster is put into read-only
mode while HA is restarting.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_offline(offline, options, callback) → {Promise}
Take the system offline. When the system is offline, no user operations can
be performed with the exception of a system shutdown.
Parameters:
Name |
Type |
Description |
offline |
Boolean
|
Set to true if desired state is offline.
Supported values: true, false.
|
options |
Object
|
Optional parameters.
- 'flush_to_disk': Flush to disk when
going offline
Supported values: 'true', 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
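Example: a sketch of taking the system offline with a flush to disk:

    db.admin_offline(true, { flush_to_disk: "true" })
        .then(function (response) { console.log(response); });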
admin_offline_request(request, callback) → {Promise}
Take the system offline. When the system is offline, no user operations can
be performed with the exception of a system shutdown.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_rebalance(options, callback) → {Promise}
Rebalance the data in the cluster so that all nodes contain an
approximately equal number of records, and/or rebalance the shards to be
equally distributed (as much as possible) across all the ranks.
The database must be offline for this operation, see
GPUdb#admin_offline
* If GPUdb#admin_rebalance
is invoked after a change is
made to the cluster, e.g., a host was added or removed,
sharded
data will be
evenly redistributed across the cluster by number of shards per rank
while unsharded data will be redistributed across the cluster by data
size per rank
* If GPUdb#admin_rebalance
is invoked at some point when unsharded data (a.k.a.
randomly-sharded)
in the cluster is unevenly distributed over time, sharded data will
not move while unsharded data will be redistributed across the
cluster by data size per rank
NOTE: Replicated data will not move as a result of this call
This endpoint's processing time depends on the amount of data in the system,
thus the API call may time out if run directly. It is recommended to run
this
endpoint asynchronously via GPUdb#create_job
.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters.
- 'rebalance_sharded_data': If
true , sharded data will be rebalanced
approximately equally across the cluster. Note that
for clusters with large amounts of sharded data,
this data transfer could be time consuming and
result in delayed query responses.
Supported values: 'true', 'false'.
The default value is 'true'.
- 'rebalance_unsharded_data': If
true , unsharded data (a.k.a. randomly-sharded) will be
rebalanced approximately equally across the
cluster. Note that for clusters with large amounts
of unsharded data, this data transfer could be time
consuming and result in delayed query responses.
Supported values: 'true', 'false'.
The default value is 'true'.
- 'table_includes': Comma-separated list
of unsharded table names to rebalance. Not
applicable to sharded tables because they are
always rebalanced. Cannot be used simultaneously
with
table_excludes . This parameter is
ignored if rebalance_unsharded_data is
false .
- 'table_excludes': Comma-separated list
of unsharded table names to not rebalance. Not
applicable to sharded tables because they are
always rebalanced. Cannot be used simultaneously
with
table_includes . This parameter is
ignored if rebalance_unsharded_data is
false .
- 'aggressiveness': Influences how much
data is moved at a time during rebalance. A higher
aggressiveness will complete the
rebalance faster. A lower
aggressiveness will take longer but
allow for better interleaving between the rebalance
and other queries. Valid values are constants from
1 (lowest) to 10 (highest). The default value is
'10'.
- 'compact_after_rebalance': Perform
compaction of deleted records once the rebalance
completes to reclaim memory and disk space. Default
is
true , unless
repair_incorrectly_sharded_data is set
to true .
Supported values: 'true', 'false'.
The default value is 'true'.
- 'compact_only': If set to
true , ignore rebalance options and
attempt to perform compaction of deleted records to
reclaim memory and disk space without rebalancing
first.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'repair_incorrectly_sharded_data':
Scans for any data sharded incorrectly and
re-routes the data to the correct location. Only
necessary if
GPUdb#admin_verify_db
reports an error in sharding alignment. This can be
done as part of a typical rebalance after expanding
the cluster or in a standalone fashion when it is
believed that data is sharded incorrectly somewhere
in the cluster. Compaction will not be performed by
default when this is enabled. If this option is set
to true , the time necessary to
rebalance and the memory used by the rebalance may
increase.
Supported values: 'true', 'false'.
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
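Example: a sketch of a direct rebalance call with placeholder option values
(as noted above, long-running rebalances are better submitted
asynchronously via GPUdb#create_job):

    db.admin_rebalance({
        rebalance_sharded_data: "true",
        rebalance_unsharded_data: "true",
        aggressiveness: "5"
    }).then(function (response) { console.log(response); });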
admin_rebalance_request(request, callback) → {Promise}
Rebalance the data in the cluster so that all nodes contain an
approximately equal number of records, and/or rebalance the shards to be
equally distributed (as much as possible) across all the ranks.
The database must be offline for this operation, see
GPUdb#admin_offline
* If GPUdb#admin_rebalance
is invoked after a change is
made to the cluster, e.g., a host was added or removed,
sharded
data will be
evenly redistributed across the cluster by number of shards per rank
while unsharded data will be redistributed across the cluster by data
size per rank
* If GPUdb#admin_rebalance
is invoked at some point when unsharded data (a.k.a.
randomly-sharded)
in the cluster is unevenly distributed over time, sharded data will
not move while unsharded data will be redistributed across the
cluster by data size per rank
NOTE: Replicated data will not move as a result of this call
This endpoint's processing time depends on the amount of data in the system,
thus the API call may time out if run directly. It is recommended to run
this
endpoint asynchronously via GPUdb#create_job
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_remove_host(host, options, callback) → {Promise}
Removes a host from an existing cluster. If the host to be removed has any
ranks running on it, the ranks must be removed using
GPUdb#admin_remove_ranks
or manually switched over to a new
host using
GPUdb#admin_switchover
prior to host removal. If
the host to be removed has the graph server or SQL planner running on it,
these must be manually switched over to a new host using
GPUdb#admin_switchover
.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
host |
String
|
Identifies the host this applies to. Can be the host
address, or formatted as 'hostN' where N is the host
number as specified in gpudb.conf |
options |
Object
|
Optional parameters.
- 'dry_run': If set to
true , only validation checks will be
performed. No host is removed.
Supported values: 'true', 'false'.
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_remove_host_request(request, callback) → {Promise}
Removes a host from an existing cluster. If the host to be removed has any
ranks running on it, the ranks must be removed using
GPUdb#admin_remove_ranks
or manually switched over to a new
host using
GPUdb#admin_switchover
prior to host removal. If
the host to be removed has the graph server or SQL planner running on it,
these must be manually switched over to a new host using
GPUdb#admin_switchover
.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_remove_ranks(ranks, options, callback) → {Promise}
Remove one or more ranks from an existing Kinetica cluster. All data
will be rebalanced to other ranks before the rank(s) is removed unless the
rebalance_sharded_data
or
rebalance_unsharded_data
parameters are set to
false
in the
options
, in which case the corresponding
sharded data
and/or unsharded data (a.k.a.
randomly-sharded) will be deleted.
The database must be offline for this operation, see
GPUdb#admin_offline
This endpoint's processing time depends on the amount of data in the system,
thus the API call may time out if run directly. It is recommended to run
this
endpoint asynchronously via GPUdb#create_job
.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
ranks |
Array.<String>
|
Each array value designates one or more ranks to
remove from the cluster. Values can be formatted as
'rankN' for a specific rank, 'hostN' (from the
gpudb.conf file) to remove all ranks on that host,
or the host IP address (hostN.address from the
gpudb.conf file) which also removes all ranks on
that host. Rank 0 (the head rank) cannot be removed
(but can be moved to another host using
GPUdb#admin_switchover ). At least one
worker rank must be left in the cluster after the
operation. |
options |
Object
|
Optional parameters.
- 'rebalance_sharded_data': If
true , sharded data will be rebalanced
approximately equally across the cluster. Note that
for clusters with large amounts of sharded data,
this data transfer could be time consuming and
result in delayed query responses.
Supported values: 'true', 'false'.
The default value is 'true'.
- 'rebalance_unsharded_data': If
true , unsharded data (a.k.a. randomly-sharded) will be
rebalanced approximately equally across the
cluster. Note that for clusters with large amounts
of unsharded data, this data transfer could be time
consuming and result in delayed query responses.
Supported values: 'true', 'false'.
The default value is 'true'.
- 'aggressiveness': Influences how much
data is moved at a time during rebalance. A higher
aggressiveness will complete the
rebalance faster. A lower
aggressiveness will take longer but
allow for better interleaving between the rebalance
and other queries. Valid values are constants from
1 (lowest) to 10 (highest). The default value is
'10'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
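Example: a sketch of removing two worker ranks, letting their data
rebalance onto the remaining ranks first (rank names are placeholders):

    db.admin_remove_ranks(["rank2", "rank3"], {
        rebalance_sharded_data: "true",
        rebalance_unsharded_data: "true"
    }).then(function (response) { console.log(response); });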
admin_remove_ranks_request(request, callback) → {Promise}
Remove one or more ranks from an existing Kinetica cluster. All data
will be rebalanced to other ranks before the rank(s) is removed unless the
rebalance_sharded_data
or
rebalance_unsharded_data
parameters are set to
false
in the
options
, in which case the corresponding
sharded data
and/or unsharded data (a.k.a.
randomly-sharded) will be deleted.
The database must be offline for this operation, see
GPUdb#admin_offline
This endpoint's processing time depends on the amount of data in the system,
thus the API call may time out if run directly. It is recommended to run
this
endpoint asynchronously via GPUdb#create_job
.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_show_alerts(num_alerts, options, callback) → {Promise}
Requests a list of the most recent alerts.
Returns lists of alert data, including timestamp and type.
Parameters:
Name |
Type |
Description |
num_alerts |
Number
|
Number of most recent alerts to request. The
response will include up to
num_alerts depending on how many
alerts there are in the system. A value of 0
returns all stored alerts. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_show_alerts_request(request, callback) → {Promise}
Requests a list of the most recent alerts.
Returns lists of alert data, including timestamp and type.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_show_cluster_operations(history_index, options, callback) → {Promise}
Requests the detailed status of the current operation (by default) or a
prior cluster operation specified by
history_index
.
Returns details on the requested cluster operation.
The response will also indicate how many cluster operations are stored in
the history.
Parameters:
Name |
Type |
Description |
history_index |
Number
|
Indicates which cluster operation to
retrieve. Use 0 for the most recent. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_show_cluster_operations_request(request, callback) → {Promise}
Requests the detailed status of the current operation (by default) or a
prior cluster operation specified by
history_index
.
Returns details on the requested cluster operation.
The response will also indicate how many cluster operations are stored in
the history.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_show_jobs(options, callback) → {Promise}
Get a list of the current jobs in GPUdb.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters.
- 'show_async_jobs': If
true , then the completed async jobs
are also included in the response. By default, once
the async jobs are completed they are no longer
included in the jobs list.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'show_worker_info': If
true , then information is also
returned from worker ranks. By default only status
from the head rank is returned.
Supported values: 'true', 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
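Example: a sketch listing current jobs, including completed async jobs:

    db.admin_show_jobs({ show_async_jobs: "true" })
        .then(function (response) { console.log(response); });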
admin_show_jobs_request(request, callback) → {Promise}
Get a list of the current jobs in GPUdb.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_show_shards(options, callback) → {Promise}
Show the mapping of shards to the corresponding rank and TOM. The response
message contains a list of 16384 (the total number of shards in the system)
rank and TOM numbers corresponding to each shard.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_show_shards_request(request, callback) → {Promise}
Show the mapping of shards to the corresponding rank and TOM. The response
message contains a list of 16384 (the total number of shards in the system)
rank and TOM numbers corresponding to each shard.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_shutdown(exit_type, authorization, options, callback) → {Promise}
Exits the database server application.
Parameters:
Name |
Type |
Description |
exit_type |
String
|
Reserved for future use. User can pass an empty
string. |
authorization |
String
|
No longer used. User can pass an empty
string. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_shutdown_request(request, callback) → {Promise}
Exits the database server application.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_switchover(processes, destinations, options, callback) → {Promise}
Manually switch over one or more processes to another host. Individual ranks
or entire hosts may be moved to another host.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
processes |
Array.<String>
|
Indicates the process identifier to switch over
to another host. Options are
'hostN' and 'rankN' where 'N' corresponds to
the number associated with a host or rank in
the
Network section of the
gpudb.conf file; e.g.,
'host[N].address' or 'rank[N].host'. If 'hostN'
is provided, all processes on that host will be
moved to another host. Each entry in this array
will be switched over to the corresponding host
entry at the same index in
destinations . |
destinations |
Array.<String>
|
Indicates to which host to switch over each
corresponding process given in
processes . Each index must be
specified as 'hostN' where 'N' corresponds
to the number
associated with a host or rank in the Network section of the
gpudb.conf file; e.g., 'host[N].address'.
Each entry in this array will receive the
corresponding
process entry at the same index in
processes . |
options |
Object
|
Optional parameters.
- 'dry_run': If set to
true , only validation checks will be
performed. Nothing is switched over.
Supported values: 'true', 'false'.
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
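Example: a dry-run sketch moving every process on one host to another
(host identifiers are placeholders):

    db.admin_switchover(["host2"], ["host3"], { dry_run: "true" })
        .then(function (response) { console.log(response); });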
admin_switchover_request(request, callback) → {Promise}
Manually switch over one or more processes to another host. Individual ranks
or entire hosts may be moved to another host.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
admin_verify_db(options, callback) → {Promise}
Verifies that the database is in a consistent state. When inconsistencies or errors
are found, the verified_ok flag in the response is set to false and the list
of errors found is provided in the error_list.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters.
- 'rebuild_on_error': [DEPRECATED -- Use
the Rebuild DB feature of GAdmin instead.]
Supported values: 'true', 'false'.
The default value is 'false'.
- 'verify_nulls': When
true , verifies that null values are
set to zero
Supported values: 'true', 'false'.
The default value is 'false'.
- 'verify_persist': When
true , persistent objects will be
compared against their state in memory and workers
will be checked for orphaned table data in persist.
To check for orphaned worker data, either set
concurrent_safe in
options to true or place
the database offline.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'concurrent_safe': When
true , allows this endpoint to be run
safely with other concurrent database operations.
Other operations may be slower while this is
running.
Supported values: 'true', 'false'.
The default value is 'true'.
- 'verify_rank0': If
true ,
compare rank0 table metadata against workers'
metadata
Supported values: 'true', 'false'.
The default value is 'false'.
- 'delete_orphaned_tables': If
true , orphaned table directories found
on workers for which there is no corresponding
metadata will be deleted. Must set
verify_persist in options
to true . It is recommended to run this
while the database is offline OR set
concurrent_safe in
options to true
Supported values: 'true', 'false'.
The default value is 'false'.
- 'verify_orphaned_tables_only': If
true , only the presence of orphaned
table directories will be checked, all persistence
checks will be skipped
Supported values: 'true', 'false'.
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
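Example: a sketch of a verification that is safe to run alongside other
database operations:

    db.admin_verify_db({ concurrent_safe: "true", verify_nulls: "true" })
        .then(function (response) {
            if (!response.verified_ok) {
                console.error(response.error_list);
            }
        });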
admin_verify_db_request(request, callback) → {Promise}
Verifies that the database is in a consistent state. When inconsistencies or errors
are found, the verified_ok flag in the response is set to false and the list
of errors found is provided in the error_list.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_convex_hull(table_name, x_column_name, y_column_name, options, callback) → {Promise}
Calculates and returns the convex hull for the values in a table specified
by table_name
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of table on which the operation will be
performed. Must be an existing table, in
[schema_name.]table_name format, using standard
name resolution rules. |
x_column_name |
String
|
Name of the column containing the x
coordinates of the points for the operation
being performed. |
y_column_name |
String
|
Name of the column containing the y
coordinates of the points for the operation
being performed. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_convex_hull_request(request, callback) → {Promise}
Calculates and returns the convex hull for the values in a table specified
by table_name
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_group_by(table_name, column_names, offset, limit, options, callback) → {Promise}
Calculates unique combinations (groups) of values for the given columns in a
given table or view and computes aggregates on each unique combination. This
is somewhat analogous to an SQL-style SELECT...GROUP BY.
For aggregation details and examples, see Aggregation. For
limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except
unrestricted-length strings may be used for computing applicable aggregates;
columns marked as store-only are unable to be used in grouping or
aggregation.
The results can be paged via the offset
and limit
parameters. For example, to get 10 groups with the largest counts the inputs
would be: limit=10, options={"sort_order":"descending", "sort_by":"value"}.
options
can be used to customize behavior of this call e.g.
filtering or sorting the results.
To group by columns 'x' and 'y' and compute the number of objects within
each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use:
column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg,
mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min,
arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to having
.
The response is returned as a dynamic schema. For details see: dynamic schemas
documentation.
If a result_table
name is specified in the
options
, the results are stored in a new table with that
name--no results are returned in the response. Both the table name and
resulting column names must adhere to standard naming
conventions; column/aggregation expressions will need to be aliased. If
the source table's shard key is used as the grouping column(s) and all result
records are selected (offset
is 0 and limit
is
-9999), the result table will be sharded, in all other cases it will be
replicated. Sorting will properly function only if the result table is
replicated or if there is only one processing node and should not be relied
upon in other cases. Not available when any of the values of
column_names
is an unrestricted-length string.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of an existing table or view on which the
operation will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. |
column_names |
Array.<String>
|
List of one or more column names,
expressions, and aggregate expressions. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned, or
END_OF_SET (-9999) to indicate that the maximum
number of results allowed by the server should be
returned. The number of records returned will never
exceed the server's own limit, defined by the
max_get_records_size parameter in
the server configuration.
Use has_more_records to see if more
records exist in the result to be fetched, and
offset & limit to request
subsequent pages of results. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of result_table . If
result_table_persist is
false (or unspecified), then this is
always allowed even if the caller does not have
permission to create tables. The generated name is
returned in
qualified_result_table_name .
Supported values: 'true', 'false'.
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema as part of
result_table and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema which is
to contain the table specified in
result_table . If the schema provided
is non-existent, it will be automatically created.
- 'expression': Filter expression to
apply to the table prior to computing the aggregate
group by.
- 'having': Filter expression to apply
to the aggregated results.
- 'sort_order': String indicating how
the returned values should be sorted - ascending or
descending.
Supported values:
- 'ascending': Indicates that the
returned values should be sorted in ascending
order.
- 'descending': Indicates that the
returned values should be sorted in descending
order.
The default value is 'ascending'.
- 'sort_by': String determining how the
results are sorted.
Supported values:
- 'key': Indicates that the returned
values should be sorted by key, which corresponds
to the grouping columns. If you have multiple
grouping columns (and are sorting by key), it will
first sort the first grouping column, then the
second grouping column, etc.
- 'value': Indicates that the returned
values should be sorted by value, which corresponds
to the aggregates. If you have multiple aggregates
(and are sorting by value), it will first sort by
the first aggregate, then the second aggregate,
etc.
The default value is 'value'.
- 'strategy_definition': The tier strategy for the table and
its columns.
- 'result_table': The name of a table
used to store the results, in
[schema_name.]table_name format, using standard name resolution rules and meeting
table naming criteria. Column
names (group-by and aggregate fields) need to be
given aliases e.g. ["FChar256 as fchar256",
"sum(FDouble) as sfd"]. If present, no results are
returned in the response. This option is not
available if one of the grouping attributes is an
unrestricted string (i.e.; not charN) type.
- 'result_table_persist': If
true , then the result table specified
in result_table will be persisted and
will not expire unless a ttl is
specified. If false , then the result
table will be an in-memory table and will expire
unless a ttl is specified otherwise.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'result_table_force_replicated': Force
the result table to be replicated (ignores any
sharding). Must be used in combination with the
result_table option.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'result_table_generate_pk': If
true then set a primary key for the
result table. Must be used in combination with the
result_table option.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'ttl': Sets the TTL
of the table specified in
result_table .
- 'chunk_size': Indicates the number of
records per chunk to be used for the result table.
Must be used in combination with the
result_table option.
- 'create_indexes': Comma-separated list
of columns on which to create indexes on the result
table. Must be used in combination with the
result_table option.
- 'view_id': ID of view of which the
result table will be a member. The default value
is ''.
- 'pivot': pivot column
- 'pivot_values': The value list
provided will become the column headers in the
output. Should be the values from the pivot_column.
- 'grouping_sets': Customize the
grouping attribute sets to compute the aggregates.
These sets can include ROLLUP or CUBE operators.
The attribute sets should be enclosed in
parentheses and can include composite attributes.
All attributes specified in the grouping sets must
be present in the group-by attributes.
- 'rollup': This option is used to
specify the multilevel aggregates.
- 'cube': This option is used to specify
the multidimensional aggregates.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
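Example: a sketch combining the grouping and paging examples above (the
table and column names are placeholders):

    // Ten (x, y) groups with the largest counts, plus sum(z) per group.
    db.aggregate_group_by(
        "example_table",
        ["x", "y", "count(*)", "sum(z)"],
        0,    // offset: start at the first group
        10,   // limit: return ten groups
        { sort_order: "descending", sort_by: "value" }
    ).then(function (response) { console.log(response); });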
aggregate_group_by_request(request, callback) → {Promise}
Calculates unique combinations (groups) of values for the given columns in a
given table or view and computes aggregates on each unique combination. This
is somewhat analogous to an SQL-style SELECT...GROUP BY.
For aggregation details and examples, see Aggregation. For
limitations, see Aggregation Limitations.
Any column(s) can be grouped on, and all column types except
unrestricted-length strings may be used for computing applicable aggregates;
columns marked as store-only are unable to be used in grouping or
aggregation.
The results can be paged via the offset
and limit
parameters. For example, to get 10 groups with the largest counts the inputs
would be: limit=10, options={"sort_order":"descending", "sort_by":"value"}.
options
can be used to customize behavior of this call e.g.
filtering or sorting the results.
To group by columns 'x' and 'y' and compute the number of objects within
each group, use: column_names=['x','y','count(*)'].
To also compute the sum of 'z' over each group, use:
column_names=['x','y','count(*)','sum(z)'].
Available aggregation functions are: count(*), sum, min, max, avg,
mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min,
arg_max and count_distinct.
Available grouping functions are Rollup, Cube, and Grouping Sets
This service also provides support for Pivot operations.
Filtering on aggregates is supported via expressions using aggregation functions supplied to having
.
The response is returned as a dynamic schema. For details see: dynamic schemas
documentation.
If a result_table
name is specified in the
options
, the results are stored in a new table with that
name--no results are returned in the response. Both the table name and
resulting column names must adhere to standard naming
conventions; column/aggregation expressions will need to be aliased. If
the source table's shard key is used as the grouping column(s) and all result
records are selected (offset
is 0 and limit
is
-9999), the result table will be sharded, in all other cases it will be
replicated. Sorting will properly function only if the result table is
replicated or if there is only one processing node and should not be relied
upon in other cases. Not available when any of the values of
column_names
is an unrestricted-length string.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_histogram(table_name, column_name, start, end, interval, options, callback) → {Promise}
Performs a histogram calculation given a table, a column, and an
interval function. The
interval
is used to produce bins of that
size
and the result, computed over the records falling within each bin, is
returned.
For each bin, the start value is inclusive, but the end value is
exclusive--except for the very last bin for which the end value is also
inclusive. The value returned for each bin is the number of records in it,
except when a column name is provided as a
value_column
. In this latter case the sum of the
values corresponding to the
value_column
is used as the
result instead. The total number of bins requested cannot exceed 10,000.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based)
build to service a request that specifies a value_column
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the operation will be
performed. Must be an existing table, in
[schema_name.]table_name format, using standard
name resolution rules. |
column_name |
String
|
Name of a column or an expression of one or
more column names over which the histogram will
be calculated. |
start |
Number
|
Lower end value of the histogram interval, inclusive. |
end |
Number
|
Upper end value of the histogram interval, inclusive. |
interval |
Number
|
The size of each bin within the start and end
parameters. |
options |
Object
|
Optional parameters.
- 'value_column': The name of the column
to use when calculating the bin values (values are
summed). The column must be a numerical type (int,
double, long, float).
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
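Example: a sketch counting records in ten bins of width 10 over the range
[0, 100] (the table and column names are placeholders):

    db.aggregate_histogram("example_table", "col_a", 0, 100, 10, {})
        .then(function (response) { console.log(response); });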
aggregate_histogram_request(request, callback) → {Promise}
Performs a histogram calculation given a table, a column, and an
interval function. The
interval
is used to produce bins of that
size
and the result, computed over the records falling within each bin, is
returned.
For each bin, the start value is inclusive, but the end value is
exclusive--except for the very last bin for which the end value is also
inclusive. The value returned for each bin is the number of records in it,
except when a column name is provided as a
value_column
. In this latter case the sum of the
values corresponding to the
value_column
is used as the
result instead. The total number of bins requested cannot exceed 10,000.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based)
build to service a request that specifies a value_column
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_k_means(table_name, column_names, k, tolerance, options, callback) → {Promise}
This endpoint runs the k-means algorithm - a heuristic algorithm
that attempts to do k-means clustering. An ideal k-means clustering
algorithm
selects k points such that the sum of the mean squared distances of each
member
of the set to the nearest of the k points is minimized. The k-means
algorithm
however does not necessarily produce such an ideal cluster. It begins with
a
randomly selected set of k points and then refines the location of the
points
iteratively and settles to a local minimum. Various parameters and options
are
provided to control the heuristic search.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based)
build to service this request.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the operation will be
performed. Must be an existing table, in
[schema_name.]table_name format, using standard
name resolution rules. |
column_names |
Array.<String>
|
List of column names on which the operation
would be performed. If n columns are
provided then each of the k result points
will have n dimensions corresponding to the
n columns. |
k |
Number
|
The number of mean points to be determined by the
algorithm. |
tolerance |
Number
|
Stop iterating when the distances between
successive points is less than the given
tolerance. |
options |
Object
|
Optional parameters.
- 'whiten': When set to 1, each of the
columns is first normalized by its standard
deviation - the default is not to whiten.
- 'max_iters': Number of times to try to
hit the tolerance limit before giving up - default
is 10.
- 'num_tries': Number of times to run
the k-means algorithm with a different randomly
selected starting points - helps avoid local
minimum. Default is 1.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of result_table . If
result_table_persist is
false (or unspecified), then this is
always allowed even if the caller does not have
permission to create tables. The generated name is
returned in
qualified_result_table_name .
Supported values:
The default value is 'false'.
- 'result_table': The name of a table
used to store the results, in
[schema_name.]table_name format, using standard name resolution rules and meeting
table naming criteria. If this
option is specified, the results are not returned
in the response.
- 'result_table_persist': If
true , then the result table specified
in result_table will be persisted and
will not expire unless a ttl is
specified. If false , then the result
table will be an in-memory table and will expire
unless a ttl is specified otherwise.
Supported values:
The default value is 'false'.
- 'ttl': Sets the TTL
of the table specified in
result_table .
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
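A minimal sketch of a k-means call, reusing the db instance from the
earlier example (table and column names are hypothetical; note that
option values are passed as strings):
// Cluster 2-D points into k=3 groups, stopping at tolerance 0.01,
// with whitening and three random restarts.
db.aggregate_k_means(
    "example.points", ["x", "y"], 3, 0.01,
    { "whiten": "1", "max_iters": "20", "num_tries": "3" }
).then(response => console.log(response));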
aggregate_k_means_request(request, callback) → {Promise}
This endpoint runs the k-means algorithm - a heuristic algorithm that
attempts to do k-means clustering. An ideal k-means clustering
algorithm selects k points such that the sum of the mean squared
distances of each member of the set to the nearest of the k points is
minimized. The k-means algorithm, however, does not necessarily produce
such an ideal cluster. It begins with a randomly selected set of k
points and then refines the location of the points iteratively,
settling to a local minimum. Various parameters and options are
provided to control the heuristic search.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based) build to service this request.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_min_max(table_name, column_name, options, callback) → {Promise}
Calculates and returns the minimum and maximum values of a particular column
in a table.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the operation will be
performed. Must be an existing table, in
[schema_name.]table_name format, using standard
name resolution rules. |
column_name |
String
|
Name of a column or an expression of one or
more columns on which the min-max will be
calculated. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
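A minimal sketch (hypothetical table and column; db as above):
// The response carries the computed extremes of the column.
db.aggregate_min_max("example.readings", "temperature", {})
    .then(response => console.log(response.min, response.max));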
aggregate_min_max_geometry(table_name, column_name, options, callback) → {Promise}
Calculates and returns the minimum and maximum x- and y-coordinates
of a particular geospatial geometry column in a table.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the operation will be
performed. Must be an existing table, in
[schema_name.]table_name format, using standard
name resolution rules. |
column_name |
String
|
Name of a geospatial geometry column on which
the min-max will be calculated. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_min_max_geometry_request(request, callback) → {Promise}
Calculates and returns the minimum and maximum x- and y-coordinates
of a particular geospatial geometry column in a table.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_min_max_request(request, callback) → {Promise}
Calculates and returns the minimum and maximum values of a particular column
in a table.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_statistics(table_name, column_name, stats, options, callback) → {Promise}
Calculates the requested statistics of the given column(s) in a
given table.
The available statistics are: count (number of total objects), mean,
stdv (standard deviation), variance, skew, kurtosis, sum, min, max,
weighted_average, cardinality (unique count), estimated_cardinality,
percentile, and percentile_rank.
Estimated cardinality is calculated by using the hyperloglog
approximation technique.
Percentiles and percentile ranks are approximate and are calculated
using the t-digest algorithm. They must include the desired
percentile/percentile_rank. To compute multiple percentiles each value
must be specified separately (i.e.
'percentile(75.0),percentile(99.0),percentile_rank(1234.56),percentile_rank(-5)').
A second, comma-separated value can be added to the percentile
statistic to calculate percentile resolution, e.g., a 50th percentile
with 200 resolution would be 'percentile(50,200)'.
The weighted average statistic requires a weight column to be specified
in weight_column_name. The weighted average is then defined as the sum
of the products of column_name times the weight_column_name values
divided by the sum of the weight_column_name values.
Additional columns can be used in the calculation of statistics via
additional_column_names. Values in these columns will be included in
the overall aggregate calculation--individual aggregates will not be
calculated per additional column. For instance, requesting the count &
mean of column_name x and additional_column_names y & z, where x holds
the numbers 1-10, y holds 11-20, and z holds 21-30, would return the
total number of x, y, & z values (30), and the single average value
across all x, y, & z values (15.5).
The response includes a list of key/value pairs of each statistic
requested and its corresponding value.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the statistics
operation will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. |
column_name |
String
|
Name of the primary column for which the
statistics are to be calculated. |
stats |
String
|
Comma separated list of the statistics to calculate,
e.g. "sum,mean".
Supported values:
- 'count': Number of objects (independent
of the given column(s)).
- 'mean': Arithmetic mean (average),
equivalent to sum/count.
- 'stdv': Sample standard deviation
(denominator is count-1).
- 'variance': Unbiased sample variance
(denominator is count-1).
- 'skew': Skewness (third standardized
moment).
- 'kurtosis': Kurtosis (fourth
standardized moment).
- 'sum': Sum of all values in the
column(s).
- 'min': Minimum value of the column(s).
- 'max': Maximum value of the column(s).
- 'weighted_average': Weighted arithmetic
mean (using the option
weight_column_name as the weighting
column).
- 'cardinality': Number of unique values
in the column(s).
- 'estimated_cardinality': Estimate (via
hyperloglog technique) of the number of unique values
in the column(s).
- 'percentile': Estimate (via t-digest) of
the given percentile of the column(s)
(percentile(50.0) will be an approximation of the
median). Add a second, comma-separated value to
calculate percentile resolution, e.g.,
'percentile(75,150)'
- 'percentile_rank': Estimate (via
t-digest) of the percentile rank of the given value
in the column(s) (if the given value is the median of
the column(s), percentile_rank() will return
approximately 50.0).
|
options |
Object
|
Optional parameters.
- 'additional_column_names': A list of
comma separated column names over which statistics
can be accumulated along with the primary column.
All columns listed and
column_name
must be of the same type. Must not include the
column specified in column_name and no
column can be listed twice.
- 'weight_column_name': Name of column
used as weighting attribute for the weighted
average statistic.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
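A sketch requesting several statistics at once (all names hypothetical;
db as above):
// Request count, mean, and an approximate median in one call,
// accumulating over two additional columns of the same type.
db.aggregate_statistics(
    "example.readings", "temperature",
    "count,mean,percentile(50.0)",
    { "additional_column_names": "humidity,pressure" }
).then(response => console.log(response.stats));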
aggregate_statistics_by_range(table_name, select_expression, column_name, value_column_name, stats, start, end, interval, options, callback) → {Promise}
Divides the given set into bins and calculates statistics of the
values of a value-column in each bin. The bins are based on the values
of a given binning-column. The statistics that may be requested are
mean, stdv (standard deviation), variance, skew, kurtosis, sum, min,
max, first, last and weighted average. In addition to the requested
statistics, the count of total samples in each bin is returned. This
counts vector is just the histogram of the column used to divide the
set members into bins. The weighted average statistic requires a weight
column to be specified in weight_column_name. The weighted average is
then defined as the sum of the products of the value column times the
weight column divided by the sum of the weight column.
There are two methods for binning the set members. In the first, which
can be used for numeric-valued binning-columns, a min, max, and
interval are specified. The number of bins, nbins, is the integer upper
bound of (max-min)/interval. Values that fall in the range
[min+n*interval, min+(n+1)*interval) are placed in the nth bin, where n
ranges from 0..nbins-2. The final bin is [min+(nbins-1)*interval, max].
In the second method, bin_values specifies a list of binning-column
values. Binning-columns whose value matches the nth member of the
bin_values list are placed in the nth bin. When a list is provided, the
binning-column must be of type string or int.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based)
build to service this request.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the ranged-statistics
operation will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. |
select_expression |
String
|
For a non-empty expression, statistics are
calculated only for those records for which
the expression is true. |
column_name |
String
|
Name of the binning-column used to divide the
set samples into bins. |
value_column_name |
String
|
Name of the value-column for which
statistics are to be computed. |
stats |
String
|
A comma-separated list of the statistics to
calculate, e.g., 'sum,mean'. Available statistics:
mean, stdv (standard deviation), variance, skew,
kurtosis, sum. |
start |
Number
|
The lower bound of the binning-column. |
end |
Number
|
The upper bound of the binning-column. |
interval |
Number
|
The interval of a bin. Set members fall into bin i
if the binning-column falls in the range
[start+interval*i, start+interval*(i+1)). |
options |
Object
|
Map of optional parameters:
- 'additional_column_names': A list of
comma separated value-column names over which
statistics can be accumulated along with the
primary value_column.
- 'bin_values': A list of comma
separated binning-column values. Values that match
the nth bin_values value are placed in the nth bin.
- 'weight_column_name': Name of the
column used as weighting column for the
weighted_average statistic.
- 'order_column_name': Name of the
column used for candlestick charting techniques.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
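A sketch of ranged statistics over numeric bins (all names
hypothetical; db as above):
// Mean and standard deviation of "price" per 5-unit bin of "minute",
// over [0, 60), restricted by the select expression to one symbol.
db.aggregate_statistics_by_range(
    "example.quotes", "symbol = 'ACME'", "minute", "price",
    "mean,stdv", 0, 60, 5, {}
).then(response => console.log(response.stats));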
aggregate_statistics_by_range_request(request, callback) → {Promise}
Divides the given set into bins and calculates statistics of the
values of a value-column in each bin. The bins are based on the values
of a given binning-column. The statistics that may be requested are
mean, stdv (standard deviation), variance, skew, kurtosis, sum, min,
max, first, last and weighted average. In addition to the requested
statistics, the count of total samples in each bin is returned. This
counts vector is just the histogram of the column used to divide the
set members into bins. The weighted average statistic requires a weight
column to be specified in weight_column_name. The weighted average is
then defined as the sum of the products of the value column times the
weight column divided by the sum of the weight column.
There are two methods for binning the set members. In the first, which
can be used for numeric-valued binning-columns, a min, max, and
interval are specified. The number of bins, nbins, is the integer upper
bound of (max-min)/interval. Values that fall in the range
[min+n*interval, min+(n+1)*interval) are placed in the nth bin, where n
ranges from 0..nbins-2. The final bin is [min+(nbins-1)*interval, max].
In the second method, bin_values specifies a list of binning-column
values. Binning-columns whose value matches the nth member of the
bin_values list are placed in the nth bin. When a list is provided, the
binning-column must be of type string or int.
NOTE: The Kinetica instance being accessed must be running a CUDA
(GPU-based)
build to service this request.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_statistics_request(request, callback) → {Promise}
Calculates the requested statistics of the given column(s) in a
given table.
The available statistics are: count (number of total objects), mean,
stdv (standard deviation), variance, skew, kurtosis, sum, min, max,
weighted_average, cardinality (unique count), estimated_cardinality,
percentile, and percentile_rank.
Estimated cardinality is calculated by using the hyperloglog
approximation technique.
Percentiles and percentile ranks are approximate and are calculated
using the t-digest algorithm. They must include the desired
percentile/percentile_rank. To compute multiple percentiles each value
must be specified separately (i.e.
'percentile(75.0),percentile(99.0),percentile_rank(1234.56),percentile_rank(-5)').
A second, comma-separated value can be added to the percentile
statistic to calculate percentile resolution, e.g., a 50th percentile
with 200 resolution would be 'percentile(50,200)'.
The weighted average statistic requires a weight column to be specified
in weight_column_name. The weighted average is then defined as the sum
of the products of column_name times the weight_column_name values
divided by the sum of the weight_column_name values.
Additional columns can be used in the calculation of statistics via
additional_column_names. Values in these columns will be included in
the overall aggregate calculation--individual aggregates will not be
calculated per additional column. For instance, requesting the count &
mean of column_name x and additional_column_names y & z, where x holds
the numbers 1-10, y holds 11-20, and z holds 21-30, would return the
total number of x, y, & z values (30), and the single average value
across all x, y, & z values (15.5).
The response includes a list of key/value pairs of each statistic
requested and its corresponding value.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_unique(table_name, column_name, offset, limit, options, callback) → {Promise}
Returns all the unique values from a particular column (specified by
column_name) of a particular table or view (specified by table_name).
If column_name is a numeric column, the values will be in
binary_encoded_response. Otherwise, if column_name is a string column,
the values will be in json_encoded_response. The results can be paged
via the offset and limit parameters.
Columns marked as store-only
cannot be used with this function.
To get the first 10 unique values sorted in descending order,
options would be:
{"limit":"10","sort_order":"descending"}.
The response is returned as a dynamic schema. For details see:
dynamic schemas documentation.
If a result_table name is specified in the options, the results are
stored in a new table with that name--no results are returned in the
response. Both the table name and resulting column name must adhere to
standard naming conventions; any column expression will need to be
aliased. If the source table's shard key is used as the column_name,
the result table will be sharded; in all other cases it will be
replicated. Sorting will function properly only if the result table is
replicated or if there is only one processing node, and should not be
relied upon in other cases. Not available if the value of column_name
is an unrestricted-length string.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of an existing table or view on which the
operation will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. |
column_name |
String
|
Name of the column or an expression containing
one or more column names on which the unique
function would be applied. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned, or
END_OF_SET (-9999) to indicate that the maximum
number of results allowed by the server should be
returned. The number of records returned will never
exceed the server's own limit, defined by the
max_get_records_size parameter in
the server configuration.
Use has_more_records to see if more
records exist in the result to be fetched, and
offset & limit to request
subsequent pages of results. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of result_table . If
result_table_persist is
false (or unspecified), then this is
always allowed even if the caller does not have
permission to create tables. The generated name is
returned in
qualified_result_table_name .
Supported values:
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema as part of
result_table and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema which is
to contain the table specified in
result_table . If the schema provided
is non-existent, it will be automatically created.
- 'expression': Optional filter
expression to apply to the table.
- 'sort_order': String indicating how
the returned values should be sorted.
Supported values:
The default value is 'ascending'.
- 'result_table': The name of the table
used to store the results, in
[schema_name.]table_name format, using standard name resolution rules and meeting
table naming criteria. If
present, no results are returned in the response.
Not available if
column_name is an
unrestricted-length string.
- 'result_table_persist': If
true , then the result table specified
in result_table will be persisted and
will not expire unless a ttl is
specified. If false , then the result
table will be an in-memory table and will expire
unless a ttl is specified otherwise.
Supported values:
The default value is 'false'.
- 'result_table_force_replicated': Force
the result table to be replicated (ignores any
sharding). Must be used in combination with the
result_table option.
Supported values:
The default value is 'false'.
- 'result_table_generate_pk': If
true then set a primary key for the
result table. Must be used in combination with the
result_table option.
Supported values:
The default value is 'false'.
- 'ttl': Sets the TTL
of the table specified in
result_table .
- 'chunk_size': Indicates the number of
records per chunk to be used for the result table.
Must be used in combination with the
result_table option.
- 'view_id': ID of view of which the
result table will be a member. The default value
is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
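A sketch fetching the first 10 unique values in descending order, per
the options example above (names hypothetical; db as above):
db.aggregate_unique("example.readings", "station", 0, 10,
    { "sort_order": "descending" }
).then(response => console.log(response));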
aggregate_unique_request(request, callback) → {Promise}
Returns all the unique values from a particular column (specified by
column_name) of a particular table or view (specified by table_name).
If column_name is a numeric column, the values will be in
binary_encoded_response. Otherwise, if column_name is a string column,
the values will be in json_encoded_response. The results can be paged
via the offset and limit parameters.
Columns marked as store-only
cannot be used with this function.
To get the first 10 unique values sorted in descending order,
options would be:
{"limit":"10","sort_order":"descending"}.
The response is returned as a dynamic schema. For details see:
dynamic schemas documentation.
If a result_table name is specified in the options, the results are
stored in a new table with that name--no results are returned in the
response. Both the table name and resulting column name must adhere to
standard naming conventions; any column expression will need to be
aliased. If the source table's shard key is used as the column_name,
the result table will be sharded; in all other cases it will be
replicated. Sorting will function properly only if the result table is
replicated or if there is only one processing node, and should not be
relied upon in other cases. Not available if the value of column_name
is an unrestricted-length string.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
aggregate_unpivot(table_name, column_names, variable_column_name, value_column_name, pivoted_columns, options, callback) → {Promise}
Rotates column values into row values.
For unpivot details and examples, see
Unpivot. For
limitations, see
Unpivot
Limitations.
Unpivot is used to normalize tables that are built for cross-tabular
reporting purposes. The unpivot operator rotates the column values for
all the pivoted columns. A variable column, value column, and all
columns from the source table except the unpivot columns are projected
into the result table. The variable and value columns in the result
table indicate the pivoted column name and values, respectively.
The response is returned as a dynamic schema. For details see:
dynamic
schemas documentation.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the operation will be
performed. Must be an existing table/view, in
[schema_name.]table_name format, using standard
name resolution rules. |
column_names |
Array.<String>
|
List of column names or expressions. A
wildcard '*' can be used to include all the
non-pivoted columns from the source table. |
variable_column_name |
String
|
Specifies the variable/parameter
column name. |
value_column_name |
String
|
Specifies the value column name. |
pivoted_columns |
Array.<String>
|
List of one or more values, typically the
column names of the input table. All the
columns in the source table must have the
same data type. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of result_table . If
result_table_persist is
false (or unspecified), then this is
always allowed even if the caller does not have
permission to create tables. The generated name is
returned in
qualified_result_table_name .
Supported values:
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema as part of
result_table and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema which is
to contain the table specified in
result_table . If the schema is
non-existent, it will be automatically created.
- 'result_table': The name of a table
used to store the results, in
[schema_name.]table_name format, using standard name resolution rules and meeting
table naming criteria. If
present, no results are returned in the response.
- 'result_table_persist': If
true , then the result table specified
in result_table will be persisted and
will not expire unless a ttl is
specified. If false , then the result
table will be an in-memory table and will expire
unless a ttl is specified otherwise.
Supported values:
The default value is 'false'.
- 'expression': Filter expression to
apply to the table prior to unpivot processing.
- 'order_by': Comma-separated list of
the columns to be sorted by; e.g. 'timestamp asc, x
desc'. The columns specified must be present in
input table. If any alias is given for any column
name, the alias must be used, rather than the
original column name. The default value is ''.
- 'chunk_size': Indicates the number of
records per chunk to be used for the result table.
Must be used in combination with the
result_table option.
- 'limit': The number of records to
keep. The default value is ''.
- 'ttl': Sets the TTL
of the table specified in
result_table .
- 'view_id': view this result table is
part of. The default value is ''.
- 'create_indexes': Comma-separated list
of columns on which to create indexes on the table
specified in
result_table . The columns
specified must be present in output column names.
If any alias is given for any column name, the
alias must be used, rather than the original column
name.
- 'result_table_force_replicated': Force
the result table to be replicated (ignores any
sharding). Must be used in combination with the
result_table option.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
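A sketch rotating four quarterly columns into rows (all names
hypothetical; db as above):
// Produces one row per (product, quarter) pair, with the pivoted
// column name in "quarter" and its value in "sales".
db.aggregate_unpivot(
    "example.quarterly_sales", ["product"],
    "quarter", "sales",
    ["q1", "q2", "q3", "q4"], {}
).then(response => console.log(response));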
aggregate_unpivot_request(request, callback) → {Promise}
Rotates column values into row values.
For unpivot details and examples, see
Unpivot. For
limitations, see
Unpivot
Limitations.
Unpivot is used to normalize tables that are built for cross-tabular
reporting purposes. The unpivot operator rotates the column values for
all the pivoted columns. A variable column, value column, and all
columns from the source table except the unpivot columns are projected
into the result table. The variable and value columns in the result
table indicate the pivoted column name and values, respectively.
The response is returned as a dynamic schema. For details see:
dynamic
schemas documentation.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_credential(credential_name, credential_updates_map, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
credential_name |
String
|
Name of the credential to be altered. Must
be an existing credential. |
credential_updates_map |
Object
|
Map containing the properties of the
credential to be updated. Error if
empty.
- 'type': New type for
the credential.
Supported values:
- 'aws_access_key'
- 'aws_iam_role'
- 'azure_ad'
- 'azure_oauth'
- 'azure_sas'
- 'azure_storage_key'
- 'docker'
-
'gcs_service_account_id'
-
'gcs_service_account_keys'
- 'hdfs'
- 'kafka'
- 'identity': New user
for the credential
- 'secret': New password
for the credential
- 'schema_name': Updates
the schema name. If
schema_name
doesn't exist, an error will be
thrown. If schema_name
is empty, then the user's
default schema will be used.
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
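A sketch updating the identity and secret of a credential (the
credential name and values are hypothetical; db as above):
db.alter_credential("example_cred",
    { "identity": "new_user", "secret": "new_password" },
    {}
).then(() => console.log("credential altered"));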
alter_credential_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_datasink(name, datasink_updates_map, options, callback) → {Promise}
Alters the properties of an existing
data sink.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the data sink to be altered. Must be an
existing data sink. |
datasink_updates_map |
Object
|
Map containing the properties of the
data sink to be updated. Error if
empty.
- 'destination':
Destination for the output data in
format
'destination_type://path[:port]'.
Supported destination types are
'http', 'https' and 'kafka'.
- 'connection_timeout':
Timeout in seconds for connecting to
this sink
- 'wait_timeout': Timeout
in seconds for waiting for a response
from this sink
- 'credential': Name of the
credential object to
be used in this data sink
- 's3_bucket_name': Name of
the Amazon S3 bucket to use as the
data sink
- 's3_region': Name of the
Amazon S3 region where the given
bucket is located
- 's3_aws_role_arn': Amazon
IAM Role ARN which has required S3
permissions that can be assumed for
the given S3 IAM user
- 'hdfs_kerberos_keytab':
Kerberos keytab file location for the
given HDFS user. This may be a KIFS
file.
- 'hdfs_delegation_token':
Delegation token for the given HDFS
user
- 'hdfs_use_kerberos': Use
kerberos authentication for the given
HDFS cluster
Supported values:
The default value is 'false'.
-
'azure_storage_account_name': Name of
the Azure storage account to use as
the data sink, this is valid only if
tenant_id is specified
- 'azure_container_name':
Name of the Azure storage container to
use as the data sink
- 'azure_tenant_id': Active
Directory tenant ID (or directory ID)
- 'azure_sas_token': Shared
access signature token for Azure
storage account to use as the data
sink
- 'azure_oauth_token':
Oauth token to access given storage
container
- 'gcs_bucket_name': Name
of the Google Cloud Storage bucket to
use as the data sink
- 'gcs_project_id': Name of
the Google Cloud project to use as the
data sink
-
'gcs_service_account_keys': Google
Cloud service account keys to use for
authenticating the data sink
- 'kafka_url': The
publicly-accessible full path URL to
the kafka broker, e.g.,
'http://172.123.45.67:9300'.
- 'kafka_topic_name': Name
of the Kafka topic to use for this
data sink, if it references a Kafka
broker
- 'anonymous': Create an
anonymous connection to the storage
provider--DEPRECATED: this is now the
default. Specify
use_managed_credentials for
non-anonymous connection
Supported values:
The default value is 'true'.
-
'use_managed_credentials': When no
credentials are supplied, we use
anonymous access by default. If this
is set, we will use cloud provider
user settings.
Supported values:
The default value is 'false'.
- 'use_https': Use https to
connect to datasink if true, otherwise
use http
Supported values:
The default value is 'true'.
- 'max_batch_size': Maximum
number of records per notification
message. The default value is '1'.
- 'max_message_size':
Maximum size in bytes of each
notification message. The default
value is '1000000'.
- 'json_format': The
desired format of JSON encoded
notifications message.
If
nested , records are
returned as an array.
Otherwise, only a single record per
message is returned.
Supported values:
The default value is 'flat'.
- 'skip_validation': Bypass
validation of connection to this data
sink.
Supported values:
The default value is 'false'.
- 'schema_name': Updates
the schema name. If
schema_name
doesn't exist, an error will be
thrown. If schema_name is
empty, then the user's
default schema will be used.
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
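A sketch pointing an existing data sink at a different Kafka topic (the
sink name, broker URL, and topic are hypothetical; db as above):
db.alter_datasink("example_sink",
    {
        "destination": "kafka://172.123.45.67:9300",
        "kafka_topic_name": "alerts"
    },
    {}
).then(() => console.log("data sink altered"));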
alter_datasink_request(request, callback) → {Promise}
Alters the properties of an existing
data sink.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_datasource(name, datasource_updates_map, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the data source to be altered. Must be an
existing data source. |
datasource_updates_map |
Object
|
Map containing the properties of the
data source to be updated. Error if
empty.
- 'location': Location of
the remote storage in
'storage_provider_type://[storage_path[:storage_port]]'
format.
Supported storage provider types are
'azure','gcs','hdfs','kafka' and
's3'.
- 'user_name': Name of
the remote system user; may be an
empty string
- 'password': Password
for the remote system user; may be
an empty string
- 'skip_validation':
Bypass validation of connection to
remote source.
Supported values:
The default value is 'false'.
- 'connection_timeout':
Timeout in seconds for connecting to
this storage provider
- 'wait_timeout': Timeout
in seconds for reading from this
storage provider
- 'credential': Name of
the credential object
to be used in data source
- 's3_bucket_name': Name
of the Amazon S3 bucket to use as
the data source
- 's3_region': Name of
the Amazon S3 region where the given
bucket is located
- 's3_aws_role_arn':
Amazon IAM Role ARN which has
required S3 permissions that can be
assumed for the given S3 IAM user
-
's3_encryption_customer_algorithm':
Customer encryption algorithm used
encrypting data
-
's3_encryption_customer_key':
Customer encryption key to encrypt
or decrypt data
- 'hdfs_kerberos_keytab':
Kerberos keytab file location for
the given HDFS user. This may be a
KIFS file.
-
'hdfs_delegation_token': Delegation
token for the given HDFS user
- 'hdfs_use_kerberos':
Use kerberos authentication for the
given HDFS cluster
Supported values:
The default value is 'false'.
-
'azure_storage_account_name': Name
of the Azure storage account to use
as the data source, this is valid
only if tenant_id is specified
- 'azure_container_name':
Name of the Azure storage container
to use as the data source
- 'azure_tenant_id':
Active Directory tenant ID (or
directory ID)
- 'azure_sas_token':
Shared access signature token for
Azure storage account to use as the
data source
- 'azure_oauth_token':
OAuth token to access given storage
container
- 'gcs_bucket_name': Name
of the Google Cloud Storage bucket
to use as the data source
- 'gcs_project_id': Name
of the Google Cloud project to use
as the data source
-
'gcs_service_account_keys': Google
Cloud service account keys to use
for authenticating the data source
- 'kafka_url': The
publicly-accessible full path URL to
the Kafka broker, e.g.,
'http://172.123.45.67:9300'.
- 'kafka_topic_name':
Name of the Kafka topic to use as
the data source
- 'jdbc_driver_jar_path':
JDBC driver jar file location. This
may be a KIFS file.
-
'jdbc_driver_class_name': Name of
the JDBC driver class
- 'anonymous': Create an
anonymous connection to the storage
provider--DEPRECATED: this is now
the default. Specify
use_managed_credentials for
non-anonymous connection
Supported values:
The default value is 'true'.
-
'use_managed_credentials': When no
credentials are supplied, we use
anonymous access by default. If
this is set, we will use cloud
provider user settings.
Supported values:
The default value is 'false'.
- 'use_https': Use https
to connect to datasource if true,
otherwise use http
Supported values:
The default value is 'true'.
- 'schema_name': Updates
the schema name. If
schema_name
doesn't exist, an error will be
thrown. If schema_name
is empty, then the user's
default schema will be used.
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
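A sketch re-pointing a data source at another S3 bucket (all names
hypothetical; db as above):
db.alter_datasource("example_source",
    {
        "location": "s3://storage.example.com",
        "s3_bucket_name": "example-bucket",
        "s3_region": "us-east-1"
    },
    {}
).then(() => console.log("data source altered"));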
alter_datasource_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_directory(directory_name, directory_updates_map, options, callback) → {Promise}
Alters an existing directory in
KiFS.
Parameters:
Name |
Type |
Description |
directory_name |
String
|
Name of the directory in KiFS to be altered. |
directory_updates_map |
Object
|
Map containing the properties of the
directory to be altered. Error if
empty.
- 'data_limit': The
maximum capacity, in bytes, to apply
to the directory. Set to -1 to
indicate no upper limit.
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
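A sketch capping a KiFS directory's capacity (directory name
hypothetical; db as above):
// data_limit is given in bytes; -1 would remove the upper limit.
db.alter_directory("example_dir", { "data_limit": "1000000000" }, {})
    .then(() => console.log("directory altered"));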
alter_directory_request(request, callback) → {Promise}
Alters an existing directory in
KiFS.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_environment(environment_name, action, value, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
environment_name |
String
|
Name of the environment to be altered. |
action |
String
|
Modification operation to be applied
Supported values:
- 'install_package': Install a python
package from PyPI, an external data source or KiFS
- 'install_requirements': Install
packages from a requirements file
- 'uninstall_package': Uninstall a python
package.
- 'uninstall_requirements': Uninstall
packages from a requirements file
- 'reset': Uninstalls all packages in the
environment and resets it to the original state at
time of creation
- 'rebuild': Recreates the environment
and re-installs all packages, upgrades the packages
if necessary based on dependencies
|
value |
String
|
The value of the modification, depending on
action . For example, if
action is install_package ,
this would be the python package name.
If action is
install_requirements , this would be the
path of a requirements file from which to install
packages.
If an external data source is specified in
datasource_name , this can be the path to
a wheel file or source archive.
Alternatively, if installing from a file (wheel or
source archive), the value may be a reference to a
file in KiFS. |
options |
Object
|
Optional parameters.
- 'datasource_name': Name of an existing
external data source from which packages specified
in
value can be loaded
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
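A sketch installing a single PyPI package into an environment
(environment and package names hypothetical; db as above):
db.alter_environment("example_env", "install_package", "numpy", {})
    .then(() => console.log("package installed"));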
alter_environment_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_resource_group(name, tier_attributes, ranking, adjoining_resource_group, options, callback) → {Promise}
Alters the properties of an existing resource group to facilitate
resource management.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the group to be altered. Must be an existing
resource group name or an empty string when used in
conjunction with the is_default_group option. |
tier_attributes |
Object
|
Optional map containing tier names and
their respective attribute group limits.
The only valid attribute limit that can be
set is max_memory (in bytes) for the VRAM &
RAM tiers.
For instance, to set max VRAM capacity to
1GB and max RAM capacity to 10GB, use:
{'VRAM':{'max_memory':'1000000000'},
'RAM':{'max_memory':'10000000000'}}
- 'max_memory': Maximum amount
of memory usable in the given tier at one
time for this group.
|
ranking |
String
|
If the resource group ranking is to be updated,
this indicates the relative ranking among existing
resource groups where this resource group will be
moved; leave blank if not changing the ranking.
When using before or
after , specify which resource group
this one will be inserted before or after in
adjoining_resource_group .
Supported values:
- ''
- 'first'
- 'last'
- 'before'
- 'after'
The default value is ''. |
adjoining_resource_group |
String
|
If ranking is
before or
after , this field
indicates the resource group
before or after which the current
group will be placed; otherwise,
leave blank. |
options |
Object
|
Optional parameters.
- 'max_cpu_concurrency': Maximum number
of simultaneous threads that will be used to
execute a request for this group.
- 'max_data': Maximum amount of
cumulative ram usage regardless of tier status for
this group.
- 'max_scheduling_priority': Maximum
priority of a scheduled task for this group.
- 'max_tier_priority': Maximum priority
of a tiered object for this group.
- 'is_default_group': If
true , this request applies to the
global default resource group. It is an error for
this field to be true when the
name field is also populated.
Supported values:
The default value is 'false'.
- 'persist': If
true and a
system-level change was requested, the system
configuration will be written to disk upon
successful application of this request. This will
commit the changes from this request and any
additional in-memory modifications.
Supported values:
The default value is 'true'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
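A sketch tightening a group's RAM ceiling and ranking it before another
group (group names hypothetical; db as above):
// tier_attributes follows the map-of-maps form shown above.
db.alter_resource_group(
    "analysts",
    { "RAM": { "max_memory": "10000000000" } },
    "before", "power_users",
    { "max_cpu_concurrency": "4" }
).then(() => console.log("resource group altered"));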
alter_resource_group_request(request, callback) → {Promise}
Alters the properties of an existing resource group to facilitate
resource management.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_role(name, action, value, options, callback) → {Promise}
Alters a Role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the role to be altered. Must be an existing
role. |
action |
String
|
Modification operation to be applied to the role.
Supported values:
- 'set_resource_group': Sets the resource
group for an internal role. The resource group must
exist; an empty string assigns the role to the
default resource group.
|
value |
String
|
The value of the modification, depending on
action . |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
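A sketch assigning a role to a resource group (names hypothetical; db
as above):
db.alter_role("analyst_role", "set_resource_group", "analysts", {})
    .then(() => console.log("role altered"));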
alter_role_request(request, callback) → {Promise}
Alters a Role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_schema(schema_name, action, value, options, callback) → {Promise}
Used to change the name of a SQL-style
schema, specified in
schema_name
.
Parameters:
Name |
Type |
Description |
schema_name |
String
|
Name of the schema to be altered. |
action |
String
|
Modification operation to be applied
Supported values:
- 'rename_schema': Renames a schema to
value . Has the same naming restrictions
as tables.
|
value |
String
|
The value of the modification, depending on
action . For now, the only valid value of
action is rename_schema ;
in this case the value is the new name of the schema. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
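A sketch renaming a schema (schema names hypothetical; db as above):
db.alter_schema("staging", "rename_schema", "staging_old", {})
    .then(() => console.log("schema renamed"));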
alter_schema_request(request, callback) → {Promise}
Used to change the name of a SQL-style
schema, specified in
schema_name
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_system_properties(property_updates_map, options, callback) → {Promise}
The
GPUdb#alter_system_properties
endpoint is primarily used
to simplify the testing of the system and is not expected to be used during
normal execution. Commands are given through the
property_updates_map
whose keys are commands and values are
strings representing integer values (for example '8000') or boolean values
('true' or 'false').
Parameters:
Name |
Type |
Description |
property_updates_map |
Object
|
Map containing the properties of the
system to be updated. Error if empty.
- 'sm_omp_threads': Set the
number of OpenMP threads that will be
used to service filter & aggregation
requests to the specified integer
value.
- 'kernel_omp_threads': Set
the number of kernel OpenMP threads to
the specified integer value.
-
'concurrent_kernel_execution': Enables
concurrent kernel execution if the
value is
true and
disables it if the value is
false .
Supported values:
-
'subtask_concurrency_limit': Sets the
maximum number of simultaneous threads
allocated to a given request, on each
rank. Note that thread allocation may
also be limited by resource group
limits and/or system load.
- 'chunk_size': Sets the
number of records per chunk to be used
for all new tables.
- 'evict_columns': Attempts
to evict columns from memory to the
persistent store. Value string is a
semicolon separated list of entries,
each entry being a table name
optionally followed by a comma and a
comma separated list of column names
to attempt to evict. An empty value
string will attempt to evict all
tables and columns.
- 'execution_mode': Sets
the execution_mode for kernel
executions to the specified string
value. Possible values are host,
device, default (engine decides), or an
integer value that indicates the maximum
chunk size to execute on the host
-
'external_files_directory': Sets the
root directory path where external
table data files are accessed from.
Path must exist on the head node
- 'flush_to_disk': Flushes
any changes to any tables to the
persistent store. These changes
include updates to the vector store,
object store, and text search store.
The value string is ignored.
- 'clear_cache': Clears
cached results. Useful to allow
repeated timing of endpoints. Value
string is the name of the table for
which to clear the cached results, or
an empty string to clear the cached
results for all tables.
- 'communicator_test':
Invoke the communicator test and
report timing results. Value string is
a semicolon separated list of
[key]=[value] expressions.
Expressions are:
num_transactions=[num] where num is
the number of request reply
transactions to invoke per test;
message_size=[bytes] where bytes is
the size in bytes of the messages to
send; check_values=[enabled] where if
enabled is true the value of the
messages received are verified.
- 'network_speed': Invoke
the network speed test and report
timing results. Value string is a
semicolon-separated list of
[key]=[value] expressions. Valid
expressions are: seconds=[time] where
time is the time in seconds to run the
test; data_size=[bytes] where bytes is
the size in bytes of the block to be
transferred; threads=[number of
threads]; to_ranks=[space-separated
list of ranks] where the list of ranks
is the ranks that rank 0 will send
data to and get data from. If to_ranks
is unspecified then all worker ranks
are used.
- 'request_timeout': Number
of minutes after which filtering
(e.g.,
GPUdb#filter ) and
aggregating (e.g.,
GPUdb#aggregate_group_by )
queries will timeout. The default
value is '20'.
- 'max_get_records_size':
The maximum number of records the
database will serve for a given data
retrieval call. The default value is
'20000'.
- 'max_grbc_batch_size':
- 'enable_audit': Enable or
disable auditing.
- 'audit_headers': Enable
or disable auditing of request
headers.
- 'audit_body': Enable or
disable auditing of request bodies.
- 'audit_data': Enable or
disable auditing of request data.
- 'audit_response': Enable
or disable auditing of response
information.
- 'shadow_agg_size': Size
of the shadow aggregate chunk cache in
bytes. The default value is
'10000000'.
- 'shadow_filter_size':
Size of the shadow filter chunk cache
in bytes. The default value is
'10000000'.
-
'synchronous_compression': Compress
vectors on set_compression (instead of
waiting for the background thread). The
default value is 'false'.
-
'enable_overlapped_equi_join': Enable
overlapped-equi-join filter. The
default value is 'true'.
- 'kafka_batch_size':
Maximum number of records to be
ingested in a single batch. The
default value is '1000'.
- 'kafka_poll_timeout':
Maximum time (milliseconds) for each
poll to get records from kafka. The
default value is '0'.
- 'kafka_wait_time':
Maximum time (seconds) to buffer
records received from kafka before
ingestion. The default value is '30'.
-
'egress_parquet_compression': Parquet
file compression type
Supported values:
- 'uncompressed'
- 'snappy'
- 'gzip'
The default value is 'snappy'.
-
'egress_single_file_max_size': Max
file size (in MB) to allow saving to a
single file. May be overridden by
target limitations. The default value
is '10000'.
- 'max_concurrent_kernels':
Sets the max_concurrent_kernels value
of the conf.
- 'tcs_per_tom': Sets the
tcs_per_tom value of the conf.
- 'tps_per_tom': Sets the
tps_per_tom value of the conf.
- 'ai_api_provider': AI API
provider type
- 'ai_api_url': AI API URL
- 'ai_api_key': AI API key
-
'ai_api_connection_timeout': AI API
connection timeout in seconds
-
'postgres_proxy_idle_connection_timeout':
Idle connection timeout in seconds
-
'postgres_proxy_keep_alive': Enable
postgres proxy keep alive. The
default value is 'false'.
|
options |
Object
|
Optional parameters.
- 'evict_to_cold': If
true
and evict_columns is specified, the given objects
will be evicted to cold storage (if such a tier
exists).
Supported values:
- 'persist': If
true the
system configuration will be written to disk upon
successful application of this request. This will
commit the changes from this request and any
additional in-memory modifications.
Supported values:
The default value is 'true'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
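A sketch adjusting two system properties in one call (the values are
illustrative only; db as above):
// All map values are strings, per the conventions described above.
db.alter_system_properties(
    { "request_timeout": "30", "max_get_records_size": "20000" },
    { "persist": "true" }
).then(() => console.log("system properties altered"));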
alter_system_properties_request(request, callback) → {Promise}
The
GPUdb#alter_system_properties
endpoint is primarily used
to simplify the testing of the system and is not expected to be used during
normal execution. Commands are given through the
property_updates_map
whose keys are commands and values are
strings representing integer values (for example '8000') or boolean values
('true' or 'false').
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_table(table_name, action, value, options, callback) → {Promise}
Apply various modifications to a table or view. The
available modifications include the following:
Manage a table's columns--a column can be added, removed, or have its
type and properties
modified, including whether it is
dictionary
encoded or not.
External tables cannot be modified except for their refresh method.
Create or delete a column (attribute) index, chunk skip index, or
geospatial index. This can speed up certain operations when using
expressions containing equality or relational operators on indexed
columns. This only applies to tables.
Create or delete a foreign key
on a particular column.
Manage a
range-partitioned or a
manual list-partitioned
table's partitions.
Set (or reset) the tier strategy
of a table or view.
Refresh and manage the refresh mode of a
materialized
view or an
external
table.
Set the time-to-live
(TTL). This can be applied
to tables or views.
Set the global access mode (i.e. locking) for a table. This setting trumps
any
role-based access controls that may be in place; e.g., a user with write
access
to a table marked read-only will not be able to insert records into it. The
mode
can be set to read-only, write-only, read/write, and no access.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Table on which the operation will be performed,
in [schema_name.]table_name format,
using standard name resolution rules.
Must be an existing table or view. |
action |
String
|
Modification operation to be applied
Supported values:
- 'allow_homogeneous_tables': No longer
supported; action will be ignored.
- 'create_index': Creates a column (attribute) index,
chunk skip index, or
geospatial index
(depending on the specified
index_type ), on the column name
specified in value .
If this column already has the specified index, an
error will be returned.
- 'delete_index': Deletes a column (attribute) index,
chunk skip index, or
geospatial index
(depending on the specified
index_type ), on the column name
specified in value .
If this column does not have the specified index, an
error will be returned.
- 'move_to_collection':
[DEPRECATED--please use
move_to_schema
and use GPUdb#create_schema to create
the schema if non-existent] Moves a table or view
into a schema named value . If the
schema provided is non-existent, it will be
automatically created.
- 'move_to_schema': Moves a table or view
into a schema named
value .
If the schema provided is nonexistent, an error will
be thrown.
If value is empty, then the table or
view will be placed in the user's default schema.
- 'protected': No longer used.
Previously set whether the given
table_name should be protected or not.
The value would have been either 'true'
or 'false'.
- 'rename_table': Renames a table or view
within its current schema to
value . Has
the same naming restrictions as tables.
- 'ttl': Sets the time-to-live in minutes of the
table or view specified in
table_name .
- 'add_column': Adds the column specified
in
value to the table specified in
table_name .
Use column_type and
column_properties in
options
to set the column's type and properties,
respectively.
- 'change_column': Changes type and
properties of the column specified in
value .
Use column_type and
column_properties in
options to set
the column's type and properties, respectively. Note
that primary key and/or shard key columns cannot be
changed.
All unchanging column properties must be listed for
the change to take place, e.g., to add dictionary
encoding to
an existing 'char4' column, both 'char4' and 'dict'
must be specified in the options map.
- 'set_column_compression': No longer
supported; action will be ignored.
- 'delete_column': Deletes the column
specified in
value from the table
specified in table_name .
- 'create_foreign_key': Creates a foreign key specified in
value using the format
'(source_column_name [, ...]) references
target_table_name(primary_key_column_name [, ...])
[as foreign_key_name]'.
- 'delete_foreign_key': Deletes a foreign key. The
value should be the foreign_key_name
specified when creating the key or the complete
string used to define it.
- 'add_partition': Adds the partition
specified in
value , to either a range-partitioned or manual list-partitioned table.
- 'remove_partition': Removes the
partition specified in
value (and
relocates all of its data to the default partition)
from either a range-partitioned or manual list-partitioned table.
- 'delete_partition': Deletes the
partition specified in
value (and all
of its data) from either a range-partitioned or manual list-partitioned table.
- 'set_global_access_mode': Sets the
global access mode (i.e. locking) for the table
specified in
table_name . Specify the
access mode in value . Valid modes are
'no_access', 'read_only', 'write_only' and
'read_write'.
- 'refresh': For a materialized view, replays all the
table creation commands required to create the view.
For an external table, reloads all data
in the table from its associated source files or data source.
- 'set_refresh_method': For a materialized view, sets the method
by which the view is refreshed to the method
specified in
value - one of 'manual',
'periodic', or 'on_change'. For an external table, sets the method by
which the table is refreshed to the method specified
in value - either 'manual' or
'on_start'.
- 'set_refresh_start_time': Sets the time
to start periodic refreshes of this materialized view to the datetime
string specified in
value with format
'YYYY-MM-DD HH:MM:SS'. Subsequent refreshes occur
at the specified time + N * the refresh period.
- 'set_refresh_stop_time': Sets the time
to stop periodic refreshes of this materialized view to the datetime
string specified in
value with format
'YYYY-MM-DD HH:MM:SS'.
- 'set_refresh_period': Sets the time
interval in seconds at which to refresh this materialized view to the value
specified in
value . Also, sets the
refresh method to periodic if not already set.
- 'set_refresh_span': Sets the future
time-offset (in seconds) for the view refresh to
stop.
- 'set_refresh_execute_as': Sets the user
name to refresh this materialized view to the value
specified in
value .
- 'remove_text_search_attributes':
Removes the text search attribute from all
columns.
- 'remove_shard_keys': Removes the shard
key property from all columns, so that the table
will be considered randomly sharded. The data is
not moved. The
value is ignored.
- 'set_strategy_definition': Sets the tier strategy for the table and
its columns to the one specified in
value , replacing the existing tier
strategy in its entirety.
- 'cancel_datasource_subscription':
Permanently unsubscribe a data source that is
loading continuously as a stream. The data source
can be Kafka / S3 / Azure.
- 'pause_datasource_subscription':
Temporarily unsubscribe a data source that is
loading continuously as a stream. The data source
can be Kafka / S3 / Azure.
- 'resume_datasource_subscription':
Resubscribe to a paused data source subscription.
The data source can be Kafka / S3 / Azure.
- 'change_owner': Change the owner
resource group of the table.
|
value |
String
|
The value of the modification, depending on
action .
For example, if action is
add_column , this would be the column
name, while the column's definition would be covered by the
column_type ,
column_properties ,
column_default_value ,
and add_column_expression in
options .
If action is ttl , it would
be the number of minutes for the new TTL.
If action is refresh , this
field would be blank. |
options |
Object
|
Optional parameters.
- 'action':
- 'column_name':
- 'table_name':
- 'column_default_value': When adding a
column, set a default value for existing records.
For nullable columns, the default value will be
null, regardless of data type.
- 'column_properties': When adding or
changing a column, set the column properties
(strings, separated by a comma: data, store_only,
text_search, char8, int8 etc).
- 'column_type': When adding or changing
a column, set the column type (strings, separated
by a comma: int, double, string, null etc).
- 'compression_type': No longer
supported; option will be ignored.
Supported values:
- 'none'
- 'snappy'
- 'lz4'
- 'lz4hc'
The default value is 'snappy'.
- 'copy_values_from_column':
[DEPRECATED--please use
add_column_expression instead.]
- 'rename_column': When changing a
column, specify new column name.
- 'validate_change_column': When
changing a column, validate the change before
applying it (or not).
Supported values:
- 'true': Validate all values. A value
too large (or too long) for the new type will
prevent any change.
- 'false': When a value is too large or
long, it will be truncated.
The default value is 'true'.
- 'update_last_access_time': Indicates
whether the time-to-live (TTL) expiration
countdown timer should be reset to the table's TTL.
Supported values:
- 'true': Reset the expiration countdown
timer to the table's configured TTL.
- 'false': Don't reset the timer;
expiration countdown will continue from where it
is, as if the table had not been accessed.
The default value is 'true'.
- 'add_column_expression': When adding a
column, an optional expression to use for the new
column's values. Any valid expression may be used,
including one containing references to existing
columns in the same table.
- 'strategy_definition': Optional
parameter for specifying the tier strategy for the table and
its columns when
action is
set_strategy_definition , replacing the
existing tier strategy in its entirety.
- 'index_type': Type of index to create,
when
action is
create_index ,
or to delete, when action is
delete_index .
Supported values:
The default value is 'column'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
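For illustration, a minimal promise-based sketch (the require path, server URL, and the table ki_home.stocks are hypothetical placeholders; adjust to your install):

    // Hypothetical setup; adjust the module path and server URL to your deployment.
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191", { timeout: 60000 });

    // Add a nullable long column, then give it a column (attribute) index.
    db.alter_table("ki_home.stocks", "add_column", "volume",
                   { column_type: "long", column_properties: "nullable" })
        .then(() => db.alter_table("ki_home.stocks", "create_index", "volume", {}))
        .then(response => console.log("table altered:", response))
        .catch(err => console.error(err));

The later sketches below reuse this db handle.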
alter_table_columns(table_name, column_alterations, options, callback) → {Promise}
Apply various modifications to columns in a table or view. The available
modifications include the following:
Create or delete an index on a
particular column. This can speed up certain operations when using
expressions
containing equality or relational operators on indexed columns. This only
applies to tables.
Manage a table's columns--a column can be added, removed, or have its
type and properties
modified, including whether it is
dictionary
encoded or not.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Table on which the operation will be performed.
Must be an existing table or view, in
[schema_name.]table_name format, using standard
name resolution rules. |
column_alterations |
Array.<Object>
|
List of alter table add/delete/change
column requests, all for the same
table. Each request is a map that
includes 'column_name', 'action', and
the options specific to that action.
The same options as in alter table
requests apply, but they are given in
the same map as the column name and
the action. For example:
[{'column_name':'col_1','action':'change_column','rename_column':'col_2'},
{'column_name':'col_1','action':'add_column','type':'int','default_value':'1'}] |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
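A sketch batching two column alterations into one call (reusing the db handle from the alter_table sketch; table and column names are hypothetical):

    db.alter_table_columns(
        "ki_home.stocks",
        [
            { column_name: "col_1", action: "change_column", rename_column: "col_2" },
            { column_name: "col_3", action: "add_column", type: "int", default_value: "1" }
        ],
        {}
    ).then(response => console.log(response))
     .catch(err => console.error(err));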
alter_table_columns_request(request, callback) → {Promise}
Apply various modifications to columns in a table or view. The available
modifications include the following:
Create or delete an index on a
particular column. This can speed up certain operations when using
expressions
containing equality or relational operators on indexed columns. This only
applies to tables.
Manage a table's columns--a column can be added, removed, or have its
type and properties
modified, including whether it is
dictionary
encoded or not.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_table_metadata(table_names, metadata_map, options, callback) → {Promise}
Updates (adds or changes) metadata for tables. The metadata keys and
values must both be strings. This is an easy way to annotate whole tables
rather than single records within tables. Some examples of metadata are the
owner of the table, table creation timestamp, etc.
Parameters:
Name |
Type |
Description |
table_names |
Array.<String>
|
Names of the tables whose metadata will be
updated, in [schema_name.]table_name format,
using standard name resolution rules. All
specified tables must exist, or an error will
be returned. |
metadata_map |
Object
|
A map which contains the metadata of the
tables that are to be updated. Note that only
one map is provided for all the tables; so the
change will be applied to every table. If the
provided map is empty, then all existing
metadata for the table(s) will be cleared. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
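A brief sketch applying one metadata map to two hypothetical tables (reusing db from the alter_table sketch); the single map is applied to every listed table:

    db.alter_table_metadata(
        ["ki_home.stocks", "ki_home.trades"],
        { owner: "analytics_team", created: "2024-01-15" },
        {}
    ).then(response => console.log(response))
     .catch(err => console.error(err));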
alter_table_metadata_request(request, callback) → {Promise}
Updates (adds or changes) metadata for tables. The metadata keys and
values must both be strings. This is an easy way to annotate whole tables
rather than single records within tables. Some examples of metadata are the
owner of the table, table creation timestamp, etc.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_table_monitor(topic_id, monitor_updates_map, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
topic_id |
String
|
The topic ID returned by
GPUdb#create_table_monitor . |
monitor_updates_map |
Object
|
Map containing the properties of the
table monitor to be updated. Error if
empty.
- 'schema_name': Updates the
schema name. If
schema_name
doesn't exist, an error will be thrown.
If schema_name is empty,
then the user's
default schema will be used.
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
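A sketch moving a monitor to another schema (topicId is a placeholder standing in for a value previously returned by GPUdb#create_table_monitor; the schema name is hypothetical):

    const topicId = "my_monitor_topic";  // placeholder; returned by create_table_monitor
    db.alter_table_monitor(topicId, { schema_name: "analytics" }, {})
        .then(response => console.log(response))
        .catch(err => console.error(err));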
alter_table_monitor_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_table_request(request, callback) → {Promise}
Apply various modifications to a table or view. The
available modifications include the following:
Manage a table's columns--a column can be added, removed, or have its
type and properties
modified, including whether it is
dictionary
encoded or not.
External tables cannot be modified except for their refresh method.
Create or delete a column (attribute) index,
chunk skip index, or geospatial index. This can speed up
certain operations when using expressions containing equality or relational
operators on indexed columns. This only applies to tables.
Create or delete a foreign key
on a particular column.
Manage a
range-partitioned or a
manual list-partitioned
table's partitions.
Set (or reset) the tier strategy
of a table or view.
Refresh and manage the refresh mode of a
materialized
view or an
external
table.
Set the time-to-live
(TTL). This can be applied
to tables or views.
Set the global access mode (i.e. locking) for a table. This setting trumps
any role-based access controls that may be in place; e.g., a user with write
access to a table marked read-only will not be able to insert records into
it. The mode can be set to read-only, write-only, read/write, and no access.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_tier(name, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the tier to be altered. Must be an existing
tier group name. |
options |
Object
|
Optional parameters.
- 'capacity': Maximum size in bytes this
tier may hold at once.
- 'high_watermark': Threshold of usage
of this tier's resource that, once exceeded, will
trigger watermark-based eviction from this tier.
- 'low_watermark': Threshold of resource
usage that, once fallen below after crossing the
high_watermark , will cease
watermark-based eviction from this tier.
- 'wait_timeout': Timeout in seconds for
reading from or writing to this resource. Applies
to cold storage tiers only.
- 'persist': If
true , the
system configuration will be written to disk upon
successful application of this request. This will
commit the changes from this request and any
additional in-memory modifications.
Supported values:
The default value is 'true'.
- 'rank': Apply the requested change
only to a specific rank.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
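A sketch capping a tier's capacity and tuning its watermarks (the tier name RAM and all values are assumptions; values are passed as strings, matching the other option maps in this API):

    db.alter_tier("RAM", {
        capacity: "100000000000",   // bytes this tier may hold at once (assumed value)
        high_watermark: "90",       // usage threshold that triggers eviction
        low_watermark: "70",        // usage threshold at which eviction ceases
        persist: "true"             // write the change to the system configuration
    }, {}).then(response => console.log(response))
          .catch(err => console.error(err));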
alter_tier_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_user(name, action, value, options, callback) → {Promise}
Alters a user.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user to be altered. Must be an existing
user. |
action |
String
|
Modification operation to be applied to the user.
Supported values:
- 'set_password': Sets the password of
the user. The user must be an internal user.
- 'set_resource_group': Sets the resource
group for an internal user. The resource group must
exist, otherwise, an empty string assigns the user
to the default resource group.
- 'set_default_schema': Set the
default_schema for an internal user. An empty string
means the user will have no default schema.
|
value |
String
|
The value of the modification, depending on
action . |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
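A one-line sketch (user and schema names are hypothetical):

    db.alter_user("jdoe", "set_default_schema", "analytics", {})
        .then(response => console.log(response))
        .catch(err => console.error(err));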
alter_user_request(request, callback) → {Promise}
Alters a user.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_video(path, options, callback) → {Promise}
Alters a video.
Parameters:
Name |
Type |
Description |
path |
String
|
Fully-qualified KiFS path to the video to be
altered. |
options |
Object
|
Optional parameters.
- 'ttl': Sets the TTL
of the video.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
alter_video_request(request, callback) → {Promise}
Alters a video.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
append_records(table_name, source_table_name, field_map, options, callback) → {Promise}
Append (or insert) all records from a source table
(specified by source_table_name ) to a particular target table
(specified by table_name ). The field map
(specified by field_map ) holds the user-specified mapping of
target table column names to their mapped source column names.
Parameters:
Name |
Type |
Description |
table_name |
String
|
The table name for the records to be appended,
in [schema_name.]table_name format, using
standard name resolution rules. Must
be an existing table. |
source_table_name |
String
|
The source table name to get records
from, in [schema_name.]table_name format,
using standard name resolution rules.
Must be an existing table name. |
field_map |
Object
|
Contains the mapping of column names from the
target table (specified by
table_name ) as the keys, and
corresponding column names or expressions (e.g.,
'col_name+1') from the source table (specified by
source_table_name ). Must be existing
column names in source table and target table,
and their types must be matched. For details on
using expressions, see Expressions. |
options |
Object
|
Optional parameters.
- 'offset': A non-negative integer
indicating the number of initial results to skip
from source_table_name .
The minimum allowed value is 0. The maximum allowed
value is MAX_INT. The default value is '0'.
- 'limit': A positive integer indicating
the maximum number of results to be returned from
source_table_name . Or END_OF_SET
(-9999) to indicate that the max number of results
should be returned. The default value is '-9999'.
- 'expression': Optional filter
expression to apply to the
source_table_name . The default value
is ''.
- 'order_by': Comma-separated list of
the columns to be sorted by from source table
(specified by
source_table_name ),
e.g., 'timestamp asc, x desc'. The
order_by columns do not have to be
present in field_map . The default
value is ''.
- 'update_on_existing_pk': Specifies the
record collision policy for inserting source table
records (specified by source_table_name ) into a
target table (specified by table_name ) with a
primary key. If set to true , any existing table
record with primary key values that match those of a
source table record being inserted will be replaced
by that new record (the new data will be "upserted").
If set to false , any existing table record with
primary key values that match those of a source
table record being inserted will remain unchanged,
while the source record will be rejected and an
error handled as determined by
ignore_existing_pk . If the specified
table does not have a primary key,
then this option has no effect.
Supported values:
- 'true': Upsert new records when
primary keys match existing records
- 'false': Reject new records when
primary keys match existing records
The default value is 'false'.
- 'ignore_existing_pk': Specifies the
record collision error-suppression policy for
inserting source table records (specified by
source_table_name ) into a target table
(specified by table_name ) with a primary key, only
used when not in upsert mode (upsert mode is
disabled when update_on_existing_pk is
false ). If set to true , any source table
record being inserted that is rejected for having
primary key values that match those of an existing
target table record will be ignored with no error
generated. If false , the rejection of any source
table record for having primary key values matching
an existing target table record will result in an
error being raised. If the specified table does not
have a primary key or if upsert mode is in effect
( update_on_existing_pk is true ),
then this option has no effect.
Supported values:
- 'true': Ignore source table records
whose primary key values collide with those of
target table records
- 'false': Raise an error for any source
table record whose primary key values collide with
those of a target table record
The default value is 'false'.
- 'truncate_strings': If set to
true , it allows inserting longer
strings into smaller charN string columns by
truncating the longer strings to fit.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
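A sketch copying filtered, ordered records between two hypothetical tables; the field map keys are target columns, and the values are source columns or expressions:

    db.append_records(
        "ki_home.stocks_archive",                  // target table (hypothetical)
        "ki_home.stocks",                          // source table (hypothetical)
        { symbol: "symbol", px: "price * 1.0" },   // target column: source expression
        { expression: "price > 0", order_by: "ts asc", update_on_existing_pk: "false" }
    ).then(response => console.log(response))
     .catch(err => console.error(err));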
append_records_request(request, callback) → {Promise}
Append (or insert) all records from a source table
(specified by source_table_name ) to a particular target table
(specified by table_name ). The field map
(specified by field_map ) holds the user-specified mapping of
target table column names to their mapped source column names.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
clear_statistics(table_name, column_name, options, callback) → {Promise}
Clears statistics (cardinality, mean value, etc.) for a column in a
specified table.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of a table, in [schema_name.]table_name
format, using standard name resolution rules. Must be
an existing table. |
column_name |
String
|
Name of the column in table_name
for which to clear statistics. The column must
be from an existing table. An empty string
clears statistics for all columns in the table. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
clear_statistics_request(request, callback) → {Promise}
Clears statistics (cardinality, mean value, etc.) for a column in a
specified table.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
clear_table(table_name, authorization, options, callback) → {Promise}
Clears (drops) one or all tables in the database cluster. The
operation is synchronous, meaning that the table will be cleared before the
function returns. The response payload returns the status of the operation
along with the name of the table that was cleared.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to be cleared, in
[schema_name.]table_name format, using standard
name resolution rules. Must be
an existing table. Empty string clears all
available tables, though this behavior is
prevented by default via gpudb.conf parameter
'disable_clear_all'. |
authorization |
String
|
No longer used. User can pass an empty
string. |
options |
Object
|
Optional parameters.
- 'no_error_if_not_exists': If
true and if the table specified in
table_name does not exist no error is
returned. If false and if the table
specified in table_name does not exist
then an error is returned.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
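A sketch dropping a hypothetical table while tolerating its absence; the authorization argument is no longer used and is passed as an empty string:

    db.clear_table("ki_home.stocks_tmp", "", { no_error_if_not_exists: "true" })
        .then(response => console.log("cleared:", response))
        .catch(err => console.error(err));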
clear_table_monitor(topic_id, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
topic_id |
String
|
The topic ID returned by
GPUdb#create_table_monitor . |
options |
Object
|
Optional parameters.
- 'keep_autogenerated_sink': If
true , the auto-generated datasink associated with this
monitor, if there is one, will be retained for
further use. If false , then the
auto-generated sink will be dropped if there are no
other monitors referencing it.
Supported values:
The default value is 'false'.
- 'clear_all_references': If
true , all references that share the
same topic_id will be cleared.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
clear_table_monitor_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
clear_table_request(request, callback) → {Promise}
Clears (drops) one or all tables in the database cluster. The
operation is synchronous, meaning that the table will be cleared before the
function returns. The response payload returns the status of the operation
along with the name of the table that was cleared.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
clear_trigger(trigger_id, options, callback) → {Promise}
Clears or cancels the trigger identified by the specified handle. The output
returns the handle of the trigger cleared as well as indicating success or
failure of the trigger deactivation.
Parameters:
Name |
Type |
Description |
trigger_id |
String
|
ID for the trigger to be deactivated. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
clear_trigger_request(request, callback) → {Promise}
Clears or cancels the trigger identified by the specified handle. The output
returns the handle of the trigger cleared as well as indicating success or
failure of the trigger deactivation.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
collect_statistics(table_name, column_names, options, callback) → {Promise}
Collect statistics for one or more columns in a specified table.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of a table, in [schema_name.]table_name
format, using standard name resolution rules. Must
be an existing table. |
column_names |
Array.<String>
|
List of one or more column names in
table_name for which to collect
statistics (cardinality, mean value, etc.). |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
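A sketch gathering statistics on two hypothetical columns:

    db.collect_statistics("ki_home.stocks", ["symbol", "price"], {})
        .then(response => console.log(response))
        .catch(err => console.error(err));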
collect_statistics_request(request, callback) → {Promise}
Collect statistics for one or more columns in a specified table.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_credential(credential_name, type, identity, secret, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
credential_name |
String
|
Name of the credential to be created. Must
contain only letters, digits, and
underscores, and cannot begin with a digit.
Must not match an existing credential name. |
type |
String
|
Type of the credential to be created.
Supported values:
- 'aws_access_key'
- 'aws_iam_role'
- 'azure_ad'
- 'azure_oauth'
- 'azure_sas'
- 'azure_storage_key'
- 'docker'
- 'gcs_service_account_id'
- 'gcs_service_account_keys'
- 'hdfs'
- 'jdbc'
- 'kafka'
- 'confluent'
|
identity |
String
|
User of the credential to be created. |
secret |
String
|
Password of the credential to be created. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
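A sketch registering a hypothetical AWS access key; the identity and secret placeholders stand in for a real key ID and secret key:

    db.create_credential("s3_cred", "aws_access_key",
                         "MY_ACCESS_KEY_ID", "MY_SECRET_ACCESS_KEY", {})
        .then(response => console.log(response))
        .catch(err => console.error(err));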
create_credential_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_datasink(name, destination, options, callback) → {Promise}
Creates a
data
sink, which contains the
destination information for a data sink that is external to the database.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the data sink to be created. |
destination |
String
|
Destination for the output data in format
'storage_provider_type://path[:port]'.
Supported storage provider types are 'azure',
'gcs', 'hdfs', 'http', 'https', 'jdbc', 'kafka'
and 's3'. |
options |
Object
|
Optional parameters.
- 'connection_timeout': Timeout in
seconds for connecting to this data sink
- 'wait_timeout': Timeout in seconds for
waiting for a response from this data sink
- 'credential': Name of the credential object to be used in
this data sink
- 's3_bucket_name': Name of the Amazon
S3 bucket to use as the data sink
- 's3_region': Name of the Amazon S3
region where the given bucket is located
- 's3_verify_ssl': Set to false for
testing purposes or when necessary to bypass TLS
errors (e.g. self-signed certificates). This value
is true by default.
Supported values:
The default value is 'true'.
- 's3_use_virtual_addressing': When true
(default), the requests URI should be specified in
virtual-hosted-style format where the bucket name
is part of the domain name in the URL.
Otherwise set to false to use path-style URI for
requests.
Supported values:
The default value is 'true'.
- 's3_aws_role_arn': Amazon IAM Role ARN
which has required S3 permissions that can be
assumed for the given S3 IAM user
- 's3_encryption_customer_algorithm':
Customer encryption algorithm used for encrypting data
- 's3_encryption_customer_key': Customer
encryption key to encrypt or decrypt data
- 's3_encryption_type': Server side
encryption type
- 's3_kms_key_id': KMS key
- 'hdfs_kerberos_keytab': Kerberos
keytab file location for the given HDFS user. This
may be a KiFS file.
- 'hdfs_delegation_token': Delegation
token for the given HDFS user
- 'hdfs_use_kerberos': Use kerberos
authentication for the given HDFS cluster
Supported values:
The default value is 'false'.
- 'azure_storage_account_name': Name of
the Azure storage account to use as the data sink,
this is valid only if tenant_id is specified
- 'azure_container_name': Name of the
Azure storage container to use as the data sink
- 'azure_tenant_id': Active Directory
tenant ID (or directory ID)
- 'azure_sas_token': Shared access
signature token for Azure storage account to use as
the data sink
- 'azure_oauth_token': Oauth token to
access given storage container
- 'gcs_bucket_name': Name of the Google
Cloud Storage bucket to use as the data sink
- 'gcs_project_id': Name of the Google
Cloud project to use as the data sink
- 'gcs_service_account_keys': Google
Cloud service account keys to use for
authenticating the data sink
- 'jdbc_driver_jar_path': JDBC driver
jar file location
- 'jdbc_driver_class_name': Name of the
JDBC driver class
- 'kafka_topic_name': Name of the Kafka
topic to publish to if
destination is
a Kafka broker
- 'max_batch_size': Maximum number of
records per notification message. The default
value is '1'.
- 'max_message_size': Maximum size in
bytes of each notification message. The default
value is '1000000'.
- 'json_format': The desired format of
JSON-encoded notification messages.
If nested , records are returned as an
array. Otherwise, only a single record per message
is returned.
Supported values:
The default value is 'flat'.
- 'use_managed_credentials': When no
credentials are supplied, we use anonymous access
by default. If this is set, we will use cloud
provider user settings.
Supported values:
The default value is 'false'.
- 'use_https': Use https to connect to
datasink if true, otherwise use http
Supported values:
The default value is 'true'.
- 'skip_validation': Bypass validation
of connection to this data sink.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
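A sketch wiring notifications to a hypothetical Kafka broker; the destination follows the 'storage_provider_type://path[:port]' format described above:

    db.create_datasink("events_sink", "kafka://broker.example.com:9092", {
        kafka_topic_name: "table_events",
        max_batch_size: "100"
    }).then(response => console.log(response))
      .catch(err => console.error(err));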
create_datasink_request(request, callback) → {Promise}
Creates a
data
sink, which contains the
destination information for a data sink that is external to the database.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_datasource(name, location, user_name, password, options, callback) → {Promise}
Creates a
data
source, which contains the
location and connection information for a data store that is external to the
database.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the data source to be created. |
location |
String
|
Location of the remote storage in
'storage_provider_type://[storage_path[:storage_port]]'
format.
Supported storage provider types are
'azure','gcs','hdfs','jdbc','kafka', 'confluent'
and 's3'. |
user_name |
String
|
Name of the remote system user; may be an empty
string |
password |
String
|
Password for the remote system user; may be an
empty string |
options |
Object
|
Optional parameters.
- 'skip_validation': Bypass validation
of connection to remote source.
Supported values:
The default value is 'false'.
- 'connection_timeout': Timeout in
seconds for connecting to this storage provider
- 'wait_timeout': Timeout in seconds for
reading from this storage provider
- 'credential': Name of the credential object to be used in
data source
- 's3_bucket_name': Name of the Amazon
S3 bucket to use as the data source
- 's3_region': Name of the Amazon S3
region where the given bucket is located
- 's3_verify_ssl': Set to false for
testing purposes or when necessary to bypass TLS
errors (e.g. self-signed certificates). This value
is true by default.
Supported values:
The default value is 'true'.
- 's3_use_virtual_addressing': Whether
to use virtual addressing when referencing the
Amazon S3 source
Supported values:
- 'true': The requests URI should be
specified in virtual-hosted-style format where the
bucket name is part of the domain name in the URL.
- 'false': Use path-style URI for
requests.
The default value is 'true'.
- 's3_aws_role_arn': Amazon IAM Role ARN
which has required S3 permissions that can be
assumed for the given S3 IAM user
- 's3_encryption_customer_algorithm':
Customer encryption algorithm used for encrypting data
- 's3_encryption_customer_key': Customer
encryption key to encrypt or decrypt data
- 'hdfs_kerberos_keytab': Kerberos
keytab file location for the given HDFS user. This
may be a KiFS file.
- 'hdfs_delegation_token': Delegation
token for the given HDFS user
- 'hdfs_use_kerberos': Use kerberos
authentication for the given HDFS cluster
Supported values:
The default value is 'false'.
- 'azure_storage_account_name': Name of
the Azure storage account to use as the data
source, this is valid only if tenant_id is
specified
- 'azure_container_name': Name of the
Azure storage container to use as the data source
- 'azure_tenant_id': Active Directory
tenant ID (or directory ID)
- 'azure_sas_token': Shared access
signature token for Azure storage account to use as
the data source
- 'azure_oauth_token': OAuth token to
access given storage container
- 'gcs_bucket_name': Name of the Google
Cloud Storage bucket to use as the data source
- 'gcs_project_id': Name of the Google
Cloud project to use as the data source
- 'gcs_service_account_keys': Google
Cloud service account keys to use for
authenticating the data source
- 'is_stream': Load from Azure/GCS/S3
continuously, as a stream.
Supported values:
The default value is 'false'.
- 'kafka_topic_name': Name of the Kafka
topic to use as the data source
- 'jdbc_driver_jar_path': JDBC driver
jar file location. This may be a KiFS file.
- 'jdbc_driver_class_name': Name of the
JDBC driver class
- 'anonymous': Use anonymous connection
to storage provider--DEPRECATED: this is now the
default. Specify use_managed_credentials for
non-anonymous connection.
Supported values:
The default value is 'true'.
- 'use_managed_credentials': When no
credentials are supplied, we use anonymous access
by default. If this is set, we will use cloud
provider user settings.
Supported values:
The default value is 'false'.
- 'use_https': Use https to connect to
datasource if true, otherwise use http
Supported values:
The default value is 'true'.
- 'schema_registry_location': Location
of Confluent Schema Registry in
'[storage_path[:storage_port]]' format.
- 'schema_registry_credential':
Confluent Schema Registry credential object name.
- 'schema_registry_port': Confluent
Schema Registry port (optional).
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
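A sketch pointing at a hypothetical S3 bucket, reusing the s3_cred credential from the create_credential sketch; user_name and password may be empty strings when a credential object is supplied:

    db.create_datasource("s3_ds", "s3://my-bucket", "", "", {
        s3_bucket_name: "my-bucket",
        s3_region: "us-east-1",
        credential: "s3_cred"
    }).then(response => console.log(response))
      .catch(err => console.error(err));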
create_datasource_request(request, callback) → {Promise}
Creates a
data
source, which contains the
location and connection information for a data store that is external to the
database.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_directory(directory_name, options, callback) → {Promise}
Creates a new directory in
KiFS. The new
directory serves as a location in which the user can upload files using
GPUdb#upload_files
.
Parameters:
Name |
Type |
Description |
directory_name |
String
|
Name of the directory in KiFS to be created. |
options |
Object
|
Optional parameters.
- 'create_home_directory': When set, a
home directory is created for the user name
provided in the value. The
directory_name must be an empty string
in this case. The user must exist.
- 'data_limit': The maximum capacity, in
bytes, to apply to the created directory. Set to -1
to indicate no upper limit. If empty, the system
default limit is applied.
- 'no_error_if_exists': If
true , does not return an error if the
directory already exists
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
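A sketch creating a KiFS directory to upload into (the directory name is hypothetical):

    db.create_directory("scratch", { no_error_if_exists: "true" })
        .then(response => console.log(response))
        .catch(err => console.error(err));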
create_directory_request(request, callback) → {Promise}
Creates a new directory in
KiFS. The new
directory serves as a location in which the user can upload files using
GPUdb#upload_files
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_environment(environment_name, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
environment_name |
String
|
Name of the environment to be created. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_environment_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_graph(graph_name, directed_graph, nodes, edges, weights, restrictions, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph resource to generate. |
directed_graph |
Boolean
|
If set to true , the graph will
be directed. If set to false ,
the graph will not be directed. Consult Directed Graphs for more
details.
Supported values:
The default value is true. |
nodes |
Array.<String>
|
Nodes represent fundamental topological units of a
graph.
Nodes must be specified using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing column names,
e.g.,
'table.column AS NODE_ID', expressions, e.g.,
'ST_MAKEPOINT(column1, column2) AS NODE_WKTPOINT',
or constant values, e.g.,
'{9, 10, 11} AS NODE_ID'.
If using constant values in an identifier
combination, the number of values
specified must match across the combination. |
edges |
Array.<String>
|
Edges represent the required fundamental
topological unit of
a graph that typically connect nodes. Edges must be
specified using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing column names,
e.g.,
'table.column AS EDGE_ID', expressions, e.g.,
'SUBSTR(column, 1, 6) AS EDGE_NODE1_NAME', or
constant values, e.g.,
"{'family', 'coworker'} AS EDGE_LABEL".
If using constant values in an identifier
combination, the number of values
specified must match across the combination. |
weights |
Array.<String>
|
Weights represent a method of informing the graph
solver of
the cost of including a given edge in a solution.
Weights must be specified
using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing column
names, e.g.,
'table.column AS WEIGHTS_EDGE_ID', expressions,
e.g.,
'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED', or
constant values, e.g.,
'{4, 15} AS WEIGHTS_VALUESPECIFIED'.
If using constant values in an identifier
combination, the number of values specified
must match across the combination. |
restrictions |
Array.<String>
|
Restrictions represent a method of informing
the graph
solver which edges and/or nodes should be
ignored for the solution. Restrictions
must be specified using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing column
names, e.g.,
'table.column AS RESTRICTIONS_EDGE_ID',
expressions, e.g.,
'column/2 AS RESTRICTIONS_VALUECOMPARED', or
constant values, e.g.,
'{0, 0, 0, 1} AS
RESTRICTIONS_ONOFFCOMPARED'.
If using constant values in an identifier
combination, the number of values
specified must match across the combination. |
options |
Object
|
Optional parameters.
- 'merge_tolerance': If node geospatial
positions are input (e.g., WKTPOINT, X, Y),
determines the minimum separation allowed between
unique nodes. If nodes are within the tolerance of
each other, they will be merged as a single node.
The default value is '1.0E-5'.
- 'recreate': If set to
true and the graph (using
graph_name ) already exists, the graph
is deleted and recreated.
Supported values:
The default value is 'false'.
- 'save_persist': If set to
true , the graph will be saved in the
persist directory (see the config reference for more
information). If set to false , the
graph will be removed when the graph server is
shutdown.
Supported values:
The default value is 'false'.
- 'add_table_monitor': Adds a table
monitor to every table used in the creation of the
graph; this table monitor will trigger the graph to
update dynamically upon inserts to the source
table(s). Note that upon database restart, if
save_persist is also set to
true , the graph will be fully
reconstructed and the table monitors will be
reattached. For more details on table monitors, see
GPUdb#create_table_monitor .
Supported values:
The default value is 'false'.
- 'graph_table': If specified, the
created graph is also created as a table with the
given name, in [schema_name.]table_name format,
using standard name resolution rules and meeting
table naming criteria. The table
will have the following identifier columns:
'EDGE_ID', 'EDGE_NODE1_ID', 'EDGE_NODE2_ID'. If
left blank, no table is created. The default value
is ''.
- 'add_turns': Adds dummy 'pillowed'
edges around intersection nodes where there are
more than three edges so that additional weight
penalties can be imposed by the solve endpoints
(this increases the total number of edges).
Supported values:
The default value is 'false'.
- 'is_partitioned':
Supported values:
The default value is 'false'.
- 'server_id': Indicates which graph
server(s) to send the request to. Default is to
send to the server with the most available memory.
- 'use_rtree': Use a range tree
structure to accelerate and improve the accuracy of
snapping, especially to edges.
Supported values:
The default value is 'true'.
- 'label_delimiter': If provided the
label string will be split according to this
delimiter and each sub-string will be applied as a
separate label onto the specified edge. The
default value is ''.
- 'allow_multiple_edges': Multigraph
choice; allowing multiple edges with the same node
pairs if set to true, otherwise, new edges with
existing same node pairs will not be inserted.
Supported values:
The default value is 'true'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
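A deliberately simplified sketch of a directed graph built from a hypothetical edge table; real identifier combinations are often richer (see the parameter notes above):

    db.create_graph(
        "road_graph",                                        // graph name (hypothetical)
        true,                                                // directed
        [],                                                  // nodes implied by edges
        ["ki_home.roads.from_node AS EDGE_NODE1_ID",
         "ki_home.roads.to_node AS EDGE_NODE2_ID"],          // edge identifiers
        ["ki_home.roads.length AS WEIGHTS_VALUESPECIFIED"],  // weight identifiers
        [],                                                  // no restrictions
        { recreate: "true" }
    ).then(response => console.log(response))
     .catch(err => console.error(err));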
create_graph_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_job(endpoint, request_encoding, data, data_str, options, callback) → {Promise}
Create a job which will run asynchronously. The response returns a job ID,
which can be used to query the status and result of the job. The status and
the result of the job upon completion can be requested by
GPUdb#get_job
.
Parameters:
Name |
Type |
Description |
endpoint |
String
|
Indicates which endpoint to execute, e.g.
'/alter/table'. |
request_encoding |
String
|
The encoding of the request payload for
the job.
Supported values:
The default value is 'binary'. |
data |
String
|
Binary-encoded payload for the job to be run
asynchronously. The payload must contain the relevant
input parameters for the endpoint indicated in
endpoint . Please see the documentation
for the appropriate endpoint to see what values must
(or can) be specified. If this parameter is used,
then request_encoding must be
binary or snappy . |
data_str |
String
|
JSON-encoded payload for the job to be run
asynchronously. The payload must contain the
relevant input parameters for the endpoint
indicated in endpoint . Please see
the documentation for the appropriate endpoint to
see what values must (or can) be specified. If
this parameter is used, then
request_encoding must be
json . |
options |
Object
|
Optional parameters.
- 'remove_job_on_complete':
Supported values:
- 'job_tag': Tag to use for submitted
job. The same tag could be used on backup cluster
to retrieve the response for the job. Tags can use
letters, numbers, '_' and '-'
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
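A sketch submitting an /alter/table call as an asynchronous JSON-encoded job; the payload fields mirror that endpoint's parameters, and the binary data argument is left empty because request_encoding is json (table name is hypothetical):

    db.create_job(
        "/alter/table",
        "json",
        "",                                   // binary payload unused for JSON encoding
        JSON.stringify({
            table_name: "ki_home.stocks",
            action: "ttl",
            value: "120",
            options: {}
        }),
        {}
    ).then(response => console.log("job:", response))
     .catch(err => console.error(err));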
create_job_request(request, callback) → {Promise}
Create a job which will run asynchronously. The response returns a job ID,
which can be used to query the status and result of the job. The status and
the result of the job upon completion can be requested by
GPUdb#get_job
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_join_table(join_table_name, table_names, column_names, expressions, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
join_table_name |
String
|
Name of the join table to be created, in
[schema_name.]table_name format, using
standard name resolution rules and
meeting table naming criteria. |
table_names |
Array.<String>
|
The list of table names composing the join,
each in [schema_name.]table_name format,
using standard name resolution rules.
Corresponds to a SQL statement FROM clause. |
column_names |
Array.<String>
|
List of member table columns or column
expressions to be included in the join.
Columns can be prefixed with
'table_id.column_name', where 'table_id' is
the table name or alias. Columns can be
aliased via the syntax 'column_name as
alias'. Wild cards '*' can be used to
include all columns across member tables or
'table_id.*' for all of a single table's
columns. Columns and column expressions
composing the join must be uniquely named or
aliased--therefore, the '*' wild card cannot
be used if column names aren't unique across
all tables. |
expressions |
Array.<String>
|
An optional list of expressions to combine
and filter the joined tables. Corresponds to
a SQL statement WHERE clause. For details
see: expressions. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of join_table_name . This is
always allowed even if the caller does not have
permission to create tables. The generated name is
returned in qualified_join_table_name .
Supported values:
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the join as part
of
join_table_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
join. If the schema is non-existent, it will be
automatically created. The default value is ''.
- 'max_query_dimensions': No longer
used.
- 'optimize_lookups': Use more memory to
speed up the joining of tables.
Supported values:
The default value is 'false'.
- 'strategy_definition': The tier strategy for the table and
its columns.
- 'ttl': Sets the TTL
of the join table specified in
join_table_name .
- 'view_id': view this projection is
part of. The default value is ''.
- 'no_count': Return a count of 0 for
the join table for logging and for
GPUdb#show_table ; optimization needed
for large overlapped equi-join stencils. The
default value is 'false'.
- 'chunk_size': Maximum number of
records per joined-chunk for this table. Defaults
to the gpudb.conf file chunk size
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
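A sketch joining two hypothetical tables, with the expression list serving as the WHERE clause:

    db.create_join_table(
        "ki_home.orders_customers",
        ["ki_home.orders", "ki_home.customers"],
        ["orders.id", "orders.total", "customers.name as customer_name"],
        ["orders.customer_id = customers.id"],
        {}
    ).then(response => console.log(response))
     .catch(err => console.error(err));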
create_join_table_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_materialized_view(table_name, options, callback) → {Promise}
Initiates the process of creating a materialized view, reserving the
view's name to prevent other views or tables from being created with that
name.
For materialized view details and examples, see
Materialized
Views.
The response contains view_id , which is used to tag each
subsequent operation (projection, union, aggregation, filter, or join)
that will compose the view.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to be created that is the
top-level table of the materialized view, in
[schema_name.]table_name format, using standard
name resolution rules and
meeting table naming criteria. |
options |
Object
|
Optional parameters.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the materialized
view as part of
table_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema which is
to contain the newly created view. If the schema
provided is non-existent, it will be automatically
created.
- 'execute_as': User name to use to run
the refresh job
- 'persist': If
true , then
the materialized view specified in
table_name will be persisted and will
not expire unless a ttl is specified.
If false , then the materialized view
will be an in-memory table and will expire unless a
ttl is specified otherwise.
Supported values:
The default value is 'false'.
- 'refresh_span': Sets the future
time-offset (in seconds) at which periodic refresh
stops
- 'refresh_stop_time': When
refresh_method is
periodic , specifies the time at which
a periodic refresh is stopped. Value is a datetime
string with format 'YYYY-MM-DD HH:MM:SS'.
- 'refresh_method': Method by which the
join can be refreshed when the data in underlying
member tables have changed.
Supported values:
- 'manual': Refresh only occurs when
manually requested by calling
GPUdb#alter_table with an 'action' of
'refresh'
- 'on_query': Refresh any time the view
is queried.
- 'on_change': If possible,
incrementally refresh (refresh just those records
added) whenever an insert, update, delete or
refresh of input table is done. A full refresh is
done if an incremental refresh is not possible.
- 'periodic': Refresh table periodically
at rate specified by
refresh_period
The default value is 'manual'.
- 'refresh_period': When
refresh_method is
periodic , specifies the period in
seconds at which refresh occurs
- 'refresh_start_time': When
refresh_method is
periodic , specifies the first time at
which a refresh is to be done. Value is a datetime
string with format 'YYYY-MM-DD HH:MM:SS'.
- 'ttl': Sets the TTL
of the table specified in
table_name .
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
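A sketch reserving a view name with an hourly periodic refresh; the returned view_id would then tag the operations composing the view (names are hypothetical):

    db.create_materialized_view("ki_home.daily_totals", {
        refresh_method: "periodic",
        refresh_period: "3600"
    }).then(response => {
        console.log("tag subsequent operations with:", response.view_id);
    }).catch(err => console.error(err));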
create_materialized_view_request(request, callback) → {Promise}
Initiates the process of creating a materialized view, reserving the
view's name to prevent other views or tables from being created with that
name.
For materialized view details and examples, see
Materialized
Views.
The response contains view_id , which is used to tag each
subsequent operation (projection, union, aggregation, filter, or join)
that will compose the view.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_proc(proc_name, execution_mode, files, command, args, options, callback) → {Promise}
Creates an instance (proc) of the
user-defined functions
(UDF) specified by the
given command, options, and files, and makes it available for execution.
Parameters:
Name |
Type |
Description |
proc_name |
String
|
Name of the proc to be created. Must not be the
name of a currently existing proc. |
execution_mode |
String
|
The execution mode of the proc.
Supported values:
- 'distributed': Input table data
will be divided into data
segments that are distributed across all
nodes in the cluster, and the proc
command will be invoked once per data
segment in parallel. Output table data
from each invocation will be saved to the
same node as the corresponding input
data.
- 'nondistributed': The proc
command will be invoked only once per
execution, and will not have direct access
to any tables named as input or
output table parameters in the call to
GPUdb#execute_proc . It will,
however, be able to access the database
using native API calls.
The default value is 'distributed'. |
files |
Object
|
A map of the files that make up the proc. The keys of
the
map are file names, and the values are the binary
contents of the files. The
file names may include subdirectory names (e.g.
'subdir/file') but must not
resolve to a directory above the root for the proc.
Files may be loaded from existing files in KiFS.
Those file names should be
prefixed with the URI kifs://, and the values in the
map should be empty. |
command |
String
|
The command (excluding arguments) that will be
invoked when
the proc is executed. It will be invoked from the
directory containing the proc
files and may be any command that can
be resolved from that directory.
It need not refer to a file actually in that
directory; for example, it could be
'java' if the proc is a Java application; however,
any necessary external
programs must be preinstalled on every database
node. If the command refers to a
file in that directory, it must be preceded with
'./' as per Linux convention.
If not specified, and exactly one file is provided
in files , that file
will be invoked. |
args |
Array.<String>
|
An array of command-line arguments that will be
passed to command when the proc is
executed. |
options |
Object
|
Optional parameters.
- 'max_concurrency_per_node': The
maximum number of concurrent instances of the proc
that will be executed per node. 0 allows unlimited
concurrency. The default value is '0'.
- 'set_environment': A python
environment to use when executing the proc. Must be
an existing environment, else an error will be
returned. The default value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
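A sketch of creating a distributed proc from a single local Python file; the proc and file names are hypothetical, and the UDF file must already exist on disk:

const fs = require("fs");
const GPUdb = require("./gpudb.js");  // assumed local path
const db = new GPUdb("http://localhost:9191");

db.create_proc(
    "sum_udf",                                        // must not name an existing proc
    "distributed",                                    // one invocation per data segment
    { "sum_udf.py": fs.readFileSync("sum_udf.py") },  // file name -> binary contents
    "python",                                         // must be preinstalled on every node
    ["sum_udf.py"],                                    // command-line arguments
    { "max_concurrency_per_node": "2" }
).then(function(response) {
    console.log(response);
});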
create_proc_request(request, callback) → {Promise}
Creates an instance (proc) of the
user-defined functions
(UDF) specified by the
given command, options, and files, and makes it available for execution.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_projection(table_name, projection_name, column_names, options, callback) → {Promise}
Creates a new
projection of
an existing table. A projection represents a subset of the columns
(potentially
including derived columns) of a table.
For projection details and examples, see
Projections. For
limitations, see
Projection Limitations and Cautions.
Window functions,
which can perform
operations like moving averages, are available through this endpoint as well
as
GPUdb#get_records_by_column
.
A projection can be created with a different
shard key
than the source table.
By specifying shard_key
, the projection will be sharded
according to the specified columns, regardless of how the source table is
sharded. The source table can even be unsharded or replicated.
If table_name
is empty, selection is performed against a
single-row
virtual table. This can be useful in executing temporal
(NOW()), identity
(USER()), or
constant-based functions
(GEODIST(-77.11, 38.88, -71.06, 42.36)).
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the existing table on which the
projection is to be applied, in
[schema_name.]table_name format, using standard
name resolution rules. An
empty table name creates a projection from a
single-row virtual table, where columns
specified should be constants or constant
expressions. |
projection_name |
String
|
Name of the projection to be created, in
[schema_name.]table_name format, using
standard name resolution rules and
meeting table naming criteria. |
column_names |
Array.<String>
|
List of columns from table_name
to be included in the projection. Can
include derived columns. Can be specified as
aliased via the syntax 'column_name as
alias'. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of projection_name . If
persist is false (or
unspecified), then this is always allowed even if
the caller does not have permission to create
tables. The generated name is returned in
qualified_projection_name .
Supported values: 'true', 'false'. The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the projection as
part of
projection_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
projection. If the schema is non-existent, it will
be automatically created. The default value is ''.
- 'expression': An optional filter expression to be applied to the
source table prior to the projection. The default
value is ''.
- 'is_replicated': If
true
then the projection will be replicated even if the
source table is not.
Supported values: 'true', 'false'. The default value is 'false'.
- 'offset': The number of initial
results to skip (this can be useful for paging
through the results). The default value is '0'.
- 'limit': The number of records to
keep. The default value is '-9999'.
- 'order_by': Comma-separated list of
the columns to be sorted by; e.g. 'timestamp asc, x
desc'. The columns specified must be present in
column_names . If any alias is given
for any column name, the alias must be used, rather
than the original column name. The default value
is ''.
- 'chunk_size': Indicates the number of
records per chunk to be used for this projection.
- 'create_indexes': Comma-separated list
of columns on which to create indexes on the
projection. The columns specified must be present
in
column_names . If any alias is
given for any column name, the alias must be used,
rather than the original column name.
- 'ttl': Sets the TTL
of the projection specified in
projection_name .
- 'shard_key': Comma-separated list of
the columns to be sharded on; e.g. 'column1,
column2'. The columns specified must be present in
column_names . If any alias is given
for any column name, the alias must be used, rather
than the original column name. The default value
is ''.
- 'persist': If
true , then
the projection specified in
projection_name will be persisted and
will not expire unless a ttl is
specified. If false , then the
projection will be an in-memory table and will
expire unless a ttl is specified
otherwise.
Supported values: 'true', 'false'. The default value is 'false'.
- 'preserve_dict_encoding': If
true , then columns that were dict
encoded in the source table will be dict encoded in
the projection.
Supported values: 'true', 'false'. The default value is 'true'.
- 'retain_partitions': Determines
whether the created projection will retain the
partitioning scheme from the source table.
Supported values: 'true', 'false'. The default value is 'false'.
- 'partition_type': Partitioning scheme to use.
Supported values: 'RANGE', 'INTERVAL', 'LIST', 'HASH', 'SERIES'.
- 'partition_keys': Comma-separated list
of partition keys, which are the columns or column
expressions by which records will be assigned to
partitions defined by
partition_definitions .
- 'partition_definitions':
Comma-separated list of partition definitions,
whose format depends on the choice of
partition_type . See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example
formats.
- 'is_automatic_partition': If
true , a new partition will be created
for values which don't fall into an existing
partition. Currently only supported for list partitions.
Supported values: 'true', 'false'. The default value is 'false'.
- 'view_id': ID of view of which this
projection is a member. The default value is ''.
- 'strategy_definition': The tier strategy for the table and
its columns.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
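A sketch of creating a filtered, sorted projection with an aliased derived column (schema, table, and column names are hypothetical):

const GPUdb = require("./gpudb.js");  // assumed local path
const db = new GPUdb("http://localhost:9191");

db.create_projection(
    "ki_home.orders",                         // source table
    "ki_home.big_orders",                     // projection to create
    ["customer_id", "order_total as total"],  // columns, one aliased
    {
        "expression": "order_total > 100",    // filter applied to the source first
        "order_by": "total desc"              // must reference the alias
    }
).then(function(response) {
    console.log(response);
});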
create_projection_request(request, callback) → {Promise}
Creates a new
projection of
an existing table. A projection represents a subset of the columns
(potentially
including derived columns) of a table.
For projection details and examples, see
Projections. For
limitations, see
Projection Limitations and Cautions.
Window functions,
which can perform
operations like moving averages, are available through this endpoint as well
as
GPUdb#get_records_by_column
.
A projection can be created with a different
shard key
than the source table.
By specifying shard_key
, the projection will be sharded
according to the specified columns, regardless of how the source table is
sharded. The source table can even be unsharded or replicated.
If table_name
is empty, selection is performed against a
single-row
virtual table. This can be useful in executing temporal
(NOW()), identity
(USER()), or
constant-based functions
(GEODIST(-77.11, 38.88, -71.06, 42.36)).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_resource_group(name, tier_attributes, ranking, adjoining_resource_group, options, callback) → {Promise}
Creates a new resource group to facilitate resource management.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the group to be created. Must contain only
letters, digits, and underscores, and cannot begin
with a digit. Must not match an existing resource group
name. |
tier_attributes |
Object
|
Optional map containing tier names and
their respective attribute group limits.
The only valid attribute limit that can be
set is max_memory (in bytes) for the VRAM &
RAM tiers.
For instance, to set max VRAM capacity to
1GB and max RAM capacity to 10GB, use:
{'VRAM':{'max_memory':'1000000000'},
'RAM':{'max_memory':'10000000000'}}
- 'max_memory': Maximum amount
of memory usable in the given tier at one
time for this group.
|
ranking |
String
|
Indicates the relative ranking among existing
resource groups where this new resource group will
be placed. When using before or
after , specify which resource group
this one will be inserted before or after in
adjoining_resource_group .
Supported values:
- 'first'
- 'last'
- 'before'
- 'after'
|
adjoining_resource_group |
String
|
If ranking is
before or
after , this field
indicates the resource group
before or after which the current
group will be placed; otherwise,
leave blank. |
options |
Object
|
Optional parameters.
- 'max_cpu_concurrency': Maximum number
of simultaneous threads that will be used to
execute a request for this group.
- 'max_data': Maximum amount of
cumulative ram usage regardless of tier status for
this group.
- 'max_scheduling_priority': Maximum
priority of a scheduled task for this group.
- 'max_tier_priority': Maximum priority
of a tiered object for this group.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
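A sketch of creating a resource group using the tier limits from the example above, ranked ahead of all existing groups (the group name and limits are hypothetical):

const GPUdb = require("./gpudb.js");  // assumed local path
const db = new GPUdb("http://localhost:9191");

db.create_resource_group(
    "analysts",
    { "VRAM": { "max_memory": "1000000000" },    // 1 GB of VRAM
      "RAM": { "max_memory": "10000000000" } },  // 10 GB of RAM
    "first",  // rank this group ahead of all others
    "",       // adjoining group only needed for 'before'/'after'
    { "max_cpu_concurrency": "4" }
).then(function(response) {
    console.log(response);
});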
create_resource_group_request(request, callback) → {Promise}
Creates a new resource group to facilitate resource management.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_role(name, options, callback) → {Promise}
Creates a new role.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the role to be created. Must contain only
lowercase letters, digits, and underscores, and cannot
begin with a digit. Must not be the same name as an
existing user or role. |
options |
Object
|
Optional parameters.
- 'resource_group': Name of an existing
resource group to associate with this role
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
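A sketch of creating a role tied to an existing resource group (both names are hypothetical):

const GPUdb = require("./gpudb.js");  // assumed local path
const db = new GPUdb("http://localhost:9191");

// role names must use only lowercase letters, digits, and underscores
db.create_role("analyst_role", { "resource_group": "analysts" })
    .then(function(response) {
        console.log(response);
    });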
create_role_request(request, callback) → {Promise}
Creates a new role.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_schema(schema_name, options, callback) → {Promise}
Creates a SQL-style
schema. Schemas are containers for tables and views.
Multiple tables and views can be defined with the same name in different
schemas.
Parameters:
Name |
Type |
Description |
schema_name |
String
|
Name of the schema to be created. Has the same
naming restrictions as tables. |
options |
Object
|
Optional parameters.
- 'no_error_if_exists': If
true , prevents an error from occurring
if the schema already exists.
Supported values: 'true', 'false'. The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
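A sketch of idempotently creating a schema (the schema name is hypothetical):

const GPUdb = require("./gpudb.js");  // assumed local path
const db = new GPUdb("http://localhost:9191");

db.create_schema("analytics", { "no_error_if_exists": "true" })
    .then(function(response) {
        console.log(response);
    });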
create_schema_request(request, callback) → {Promise}
Creates a SQL-style
schema. Schemas are containers for tables and views.
Multiple tables and views can be defined with the same name in different
schemas.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_table(table_name, type_id, options, callback) → {Promise}
Creates a new table. The type of the table is given by
type_id
, which must be the ID
of
a currently registered type (i.e. one created via
GPUdb#create_type
).
A table may optionally be designated to use a
replicated
distribution scheme,
or be assigned: foreign keys to
other tables, a partitioning
scheme, and/or a tier strategy.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to be created, in
[schema_name.]table_name format, using standard
name resolution rules and
meeting table naming criteria. Error
for requests with existing table of the same
name and type ID may be suppressed by using the
no_error_if_exists option. |
type_id |
String
|
ID of a currently registered type. All objects
added to the newly created table will be of this
type. |
options |
Object
|
Optional parameters.
- 'no_error_if_exists': If
true , prevents an error from occurring
if the table already exists and is of the given
type. If a table with the same ID but a different
type exists, it is still an error.
Supported values: 'true', 'false'. The default value is 'false'.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of table_name . If
is_result_table is true ,
then this is always allowed even if the caller does
not have permission to create tables. The generated
name is returned in
qualified_table_name .
Supported values: 'true', 'false'. The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema as part of
table_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema which is
to contain the newly created table. If the schema
is non-existent, it will be automatically created.
- 'is_collection': [DEPRECATED--please
use
GPUdb#create_schema to create a
schema instead] Indicates whether to create a
schema instead of a table.
Supported values: 'true', 'false'. The default value is 'false'.
- 'disallow_homogeneous_tables': No
longer supported; value will be ignored.
Supported values: 'true', 'false'. The default value is 'false'.
- 'is_replicated': Affects the distribution scheme for the
table's data. If
true and the given
type has no explicit shard key defined, the table will
be replicated. If
false , the table will be sharded according to the shard
key specified in the given type_id , or
randomly sharded, if no shard key
is specified. Note that a type containing a shard
key cannot be used to create a replicated table.
Supported values: 'true', 'false'. The default value is 'false'.
- 'foreign_keys': Semicolon-separated
list of foreign keys, of the format
'(source_column_name [, ...]) references
target_table_name(primary_key_column_name [, ...])
[as foreign_key_name]'.
- 'foreign_shard_key': Foreign shard key
of the format 'source_column references
shard_by_column from
target_table(primary_key_column)'.
- 'partition_type': Partitioning scheme to use.
Supported values: 'RANGE', 'INTERVAL', 'LIST', 'HASH', 'SERIES'.
- 'partition_keys': Comma-separated list
of partition keys, which are the columns or column
expressions by which records will be assigned to
partitions defined by
partition_definitions .
- 'partition_definitions':
Comma-separated list of partition definitions,
whose format depends on the choice of
partition_type . See range partitioning, interval partitioning, list partitioning, hash partitioning, or series partitioning for example
formats.
- 'is_automatic_partition': If
true , a new partition will be created
for values which don't fall into an existing
partition. Currently only supported for list partitions.
Supported values: 'true', 'false'. The default value is 'false'.
- 'ttl': Sets the TTL
of the table specified in
table_name .
- 'chunk_size': Indicates the number of
records per chunk to be used for this table.
- 'is_result_table': Indicates whether
the table is a memory-only table. A result table
cannot contain columns with store_only or
text_search data-handling or that are non-charN strings, and it will
not be retained if the server is restarted.
Supported values: 'true', 'false'. The default value is 'false'.
- 'strategy_definition': The tier strategy for the table and
its columns.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
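A sketch of registering a type and then creating a table of that type; schema, table, and field names are hypothetical (see GPUdb#create_type below for the type-definition format):

const GPUdb = require("./gpudb.js");  // assumed local path
const db = new GPUdb("http://localhost:9191");

const typeDef = JSON.stringify({
    type: "record",
    name: "point",
    fields: [
        { name: "x", type: "double" },
        { name: "y", type: "double" }
    ]
});

db.create_type(typeDef, "point_type", {}, {})
    .then(function(response) {
        // use the newly registered type's ID to create the table
        return db.create_table("ki_home.points", response.type_id, {
            "no_error_if_exists": "true"
        });
    })
    .then(function(response) {
        console.log(response);
    });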
create_table_external(table_name, filepaths, modify_columns, create_table_options, options, callback) → {Promise}
Creates a new
external table, which is a
local database object whose source data is located externally to the
database. The source data can
be located either in
KiFS;
on the cluster, accessible to the database; or
remotely, accessible via a pre-defined external
data source.
The external table can have its structure defined explicitly, via
create_table_options
,
which contains many of the options from GPUdb#create_table
; or
defined implicitly, inferred
from the source data.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to be created, in
[schema_name.]table_name format, using
standard name resolution rules and
meeting
table naming criteria. |
filepaths |
Array.<String>
|
A list of file paths from which data will be
sourced;
For paths in KiFS, use the URI prefix of
kifs:// followed by the path to
a file or directory. File matching by prefix is
supported, e.g. kifs://dir/file would match
dir/file_1
and dir/file_2. When prefix matching is used,
the path must start with a full, valid KiFS
directory name.
If an external data source is specified in
datasource_name , these file
paths must resolve to accessible files at that
data source location. Prefix matching is
supported.
If the data source is hdfs, prefixes must be
aligned with directories, i.e. partial file
names will
not match.
If no data source is specified, the files are
assumed to be local to the database and must
all be
accessible to the gpudb user, residing on the
path (or relative to the path) specified by the
external files directory in the Kinetica
configuration file. Wildcards
(*) can be used to
specify a group of files. Prefix matching is
supported, the prefixes must be aligned with
directories.
If the first path ends in .tsv, the text
delimiter will be defaulted to a tab character.
If the first path ends in .psv, the text
delimiter will be defaulted to a pipe character
(|). |
modify_columns |
Object
|
Not implemented yet |
create_table_options |
Object
|
Options from
GPUdb#create_table ,
allowing the structure of the table to
be defined independently of the data
source
- 'type_id': ID of a
currently registered type.
- 'no_error_if_exists': If
true ,
prevents an error from occurring if
the table already exists and is of the
given type. If a table with
the same name but a different type
exists, it is still an error.
Supported values: 'true', 'false'. The default value is 'false'.
- 'is_replicated': Affects
the distribution scheme
for the table's data. If
true and the
given table has no explicit shard key defined,
the
table will be replicated. If
false , the table will be
sharded according to
the shard key specified in the
given type_id , or
randomly sharded, if
no shard key is specified.
Note that a type containing a shard
key cannot be used to create a
replicated table.
Supported values: 'true', 'false'. The default value is 'false'.
- 'foreign_keys':
Semicolon-separated list of
foreign keys, of the
format
'(source_column_name [, ...])
references
target_table_name(primary_key_column_name
[, ...]) [as foreign_key_name]'.
- 'foreign_shard_key':
Foreign shard key of the format
'source_column references
shard_by_column from
target_table(primary_key_column)'.
- 'partition_type': Partitioning scheme
to use.
Supported values: 'RANGE', 'INTERVAL', 'LIST', 'HASH', 'SERIES'.
- 'partition_keys':
Comma-separated list of partition
keys, which are the columns or
column expressions by which records
will be assigned to partitions defined
by
partition_definitions .
- 'partition_definitions':
Comma-separated list of partition
definitions, whose format depends
on the choice of
partition_type . See
range partitioning,
interval
partitioning,
list partitioning,
hash partitioning,
or
series partitioning
for example formats.
- 'is_automatic_partition':
If
true ,
a new partition will be created for
values which don't fall into an
existing partition. Currently,
only supported for list partitions.
Supported values: 'true', 'false'. The default value is 'false'.
- 'ttl': Sets the TTL of the table
specified in
table_name .
- 'chunk_size': Indicates
the number of records per chunk to be
used for this table.
- 'is_result_table':
Indicates whether the table is a
memory-only table. A
result table cannot contain
columns with store_only or text_search
data-handling or
that are
non-charN strings,
and it will not be retained if
the server is restarted.
Supported values: 'true', 'false'. The default value is 'false'.
- 'strategy_definition':
The tier strategy
for the table and its columns.
|
options |
Object
|
Optional parameters.
- 'bad_record_table_name': Name of a
table to which records that were rejected are
written.
The bad-record-table has the following columns:
line_number (long), line_rejected (string),
error_message (string). When
error_handling is
abort , the bad-record table is not
populated.
- 'bad_record_table_limit': A positive
integer indicating the maximum number of records
that can be
written to the bad-record-table. The default value
is '10000'.
- 'bad_record_table_limit_per_input':
For subscriptions, a positive integer indicating
the maximum number
of records that can be written to the
bad-record-table per file/payload. Default value
will be
bad_record_table_limit and total size
of the table per rank is limited to
bad_record_table_limit .
- 'batch_size': Number of records to
insert per batch when inserting data. The default
value is '50000'.
- 'column_formats': For each target
column specified, applies the column-property-bound
format to the source data loaded into that column.
Each column format will contain a mapping of one
or more of its column properties to an appropriate
format for each property. Currently supported
column properties include date, time, & datetime.
The parameter value must be formatted as a JSON
string of maps of column names to maps of column
properties to their corresponding column formats,
e.g.,
'{ "order_date" : { "date" : "%Y.%m.%d" },
"order_time" : { "time" : "%H:%M:%S" } }'.
See
default_column_formats for valid
format syntax.
- 'columns_to_load': Specifies a
comma-delimited list of columns from the source
data to
load. If more than one file is being loaded, this
list applies to all files.
Column numbers can be specified discretely or as a
range. For example, a value of '5,7,1..3' will
insert values from the fifth column in the source
data into the first column in the target table,
from the seventh column in the source data into the
second column in the target table, and from the
first through third columns in the source data into
the third through fifth columns in the target
table.
If the source data contains a header, column names
matching the file header names may be provided
instead of column numbers. If the target table
doesn't exist, the table will be created with the
columns in this order. If the target table does
exist with columns in a different order than the
source data, this list can be used to match the
order of the target table. For example, a value of
'C, B, A' will create a three column table with
column C, followed by column B, followed by column
A; or will insert those fields in that order into a
table created with columns in that order. If
the target table exists, the column names must
match the source data field names for a
name-mapping
to be successful.
Mutually exclusive with
columns_to_skip .
- 'columns_to_skip': Specifies a
comma-delimited list of columns from the source
data to
skip. Mutually exclusive with
columns_to_load .
- 'compression_type': Source data
compression type
Supported values:
- 'none': No compression.
- 'auto': Auto detect compression type
- 'gzip': gzip file compression.
- 'bzip2': bzip2 file compression.
The default value is 'auto'.
- 'datasource_name': Name of an existing
external data source from which data file(s)
specified in
filepaths will be loaded
- 'default_column_formats': Specifies
the default format to be applied to source data
loaded
into columns with the corresponding column
property. Currently supported column properties
include
date, time, & datetime. This default
column-property-bound format can be overridden by
specifying a
column property & format for a given target column
in
column_formats . For
each specified annotation, the format will apply to
all columns with that annotation unless a custom
column_formats for that annotation is
specified.
The parameter value must be formatted as a JSON
string that is a map of column properties to their
respective column formats, e.g., '{ "date" :
"%Y.%m.%d", "time" : "%H:%M:%S" }'. Column
formats are specified as a string of control
characters and plain text. The supported control
characters are 'Y', 'm', 'd', 'H', 'M', and 'S',
which follow the Linux 'strptime()'
specification, as well as 's', which specifies
seconds and fractional seconds (though the
fractional
component will be truncated past milliseconds).
Formats for the 'date' annotation must include the
'Y', 'm', and 'd' control characters. Formats for
the 'time' annotation must include the 'H', 'M',
and either 'S' or 's' (but not both) control
characters. Formats for the 'datetime' annotation
must meet both the 'date' and 'time' control character
requirements. For example, '{"datetime" : "%m/%d/%Y
%H:%M:%S" }' would be used to interpret text
as "05/04/2000 12:12:11"
- 'error_handling': Specifies how errors
should be handled upon insertion.
Supported values:
- 'permissive': Records with missing
columns are populated with nulls if possible;
otherwise, the malformed records are skipped.
- 'ignore_bad_records': Malformed
records are skipped.
- 'abort': Stops current insertion and
aborts entire operation when an error is
encountered. Primary key collisions are considered
abortable errors in this mode.
The default value is 'abort'.
- 'external_table_type': Specifies
whether the external table holds a local copy of
the external data.
Supported values:
- 'materialized': Loads a copy of the
external data into the database, refreshed on
demand
- 'logical': External data will not be
loaded into the database; the data will be
retrieved from the source upon servicing each query
against the external table
The default value is 'materialized'.
- 'file_type': Specifies the type of the
file(s) whose records will be inserted.
Supported values:
- 'avro': Avro file format
- 'delimited_text': Delimited text file
format; e.g., CSV, TSV, PSV, etc.
- 'gdb': Esri/GDB file format
- 'json': JSON file format
- 'parquet': Apache Parquet file format
- 'shapefile': ShapeFile file format
The default value is 'delimited_text'.
- 'gdal_configuration_options':
Comma-separated list of GDAL configuration
options, for the specific request: key=value
- 'ignore_existing_pk': Specifies the
record collision error-suppression policy for
inserting into a table with a primary key, only used when
not in upsert mode (upsert mode is disabled when
update_on_existing_pk is
false ). If set to
true , any record being inserted that
is rejected
for having primary key values that match those of
an existing table record will be ignored with no
error generated. If false , the
rejection of any
record for having primary key values matching an
existing record will result in an error being
reported, as determined by
error_handling . If the specified
table does not
have a primary key or if upsert mode is in effect
(update_on_existing_pk is
true ), then this option has no effect.
Supported values:
- 'true': Ignore new records whose
primary key values collide with those of existing
records
- 'false': Treat as errors any new
records whose primary key values collide with those
of existing records
The default value is 'false'.
- 'ingestion_mode': Whether to do a full
load, dry run, or perform a type inference on the
source data.
Supported values:
- 'full': Run a type inference on the
source data (if needed) and ingest
- 'dry_run': Does not load data, but
walks through the source data and determines the
number of valid records, taking into account the
current mode of
error_handling .
- 'type_inference_only': Infer the type
of the source data and return, without ingesting
any data. The inferred type is returned in the
response.
The default value is 'full'.
- 'jdbc_fetch_size': The JDBC fetch
size, which determines how many rows to fetch per
round trip. The default value is '50000'.
- 'kafka_consumers_per_rank': Number of
Kafka consumer threads per rank (valid range 1-6).
The default value is '1'.
- 'kafka_group_id': The group id to be
used when consuming data from a Kafka topic (valid
only for Kafka datasource subscriptions).
- 'kafka_offset_reset_policy': Policy to
determine whether the Kafka data consumption starts
either at earliest offset or latest offset.
Supported values: 'earliest', 'latest'. The default value is 'earliest'.
- 'kafka_optimistic_ingest': Enable
optimistic ingestion where Kafka topic offsets and
table data are committed independently to achieve
parallelism.
Supported values: 'true', 'false'. The default value is 'false'.
- 'kafka_subscription_cancel_after':
Sets the Kafka subscription lifespan (in minutes).
Expired subscription will be cancelled
automatically.
- 'kafka_type_inference_fetch_timeout':
Maximum time to collect Kafka messages before
running type inference on the collected set.
- 'layer': Geo file layer name(s),
comma-separated.
- 'loading_mode': Scheme for
distributing the extraction and loading of data
from the source data file(s). This option applies
only when loading files that are local to the
database
Supported values:
- 'head': The head node loads all data.
All files must be available to the head node.
- 'distributed_shared': The head node
coordinates loading data by worker
processes across all nodes from shared files
available to all workers.
NOTE:
Instead of existing on a shared source, the files
can be duplicated on a source local to each host
to improve performance, though the files must
appear as the same data set from the perspective of
all hosts performing the load.
- 'distributed_local': A single worker
process on each node loads all files
that are available to it. This option works best
when each worker loads files from its own file
system, to maximize performance. In order to avoid
data duplication, either each worker performing
the load needs to have visibility to a set of files
unique to it (no file is visible to more than
one node) or the target table needs to have a
primary key (which will allow the worker to
automatically deduplicate data).
NOTE:
If the target table doesn't exist, the table
structure will be determined by the head node. If
the
head node has no files local to it, it will be
unable to determine the structure and the request
will fail.
If the head node is configured to have no worker
processes, no data strictly accessible to the head
node will be loaded.
The default value is 'head'.
- 'local_time_offset': Apply an offset
to Avro local timestamp columns.
- 'max_records_to_load': Limit the
number of records to load in this request: if this
number
is larger than
batch_size , then the
number of records loaded will be
limited to the next whole number of
batch_size (per working thread).
- 'num_tasks_per_rank': Number of tasks
for reading file per rank. Default will be system
configuration parameter,
external_file_reader_num_tasks.
- 'poll_interval': Number of seconds between
attempts to load external files into the table.
If zero, polling will be continuous as long as
data is found. If no data is found, the
interval will steadily increase to a maximum of
60 seconds. The default value is '0'.
- 'primary_keys': Comma-separated list
of column names to set as primary keys, when not
specified in the type.
- 'refresh_method': Method by which the
table can be refreshed from its source data.
Supported values:
- 'manual': Refresh only occurs when
manually requested by invoking the refresh action
of
GPUdb#alter_table on this table.
- 'on_start': Refresh table on database
startup and when manually requested by invoking the
refresh action of
GPUdb#alter_table
on this table.
The default value is 'manual'.
- 'schema_registry_schema_name': Name of
the Avro schema in the schema registry to use when
reading Avro records.
- 'shard_keys': Comma-separated list of
column names to set as shard keys, when not
specified in the type.
- 'skip_lines': Number of lines to skip from
the beginning of the file.
- 'subscribe': Continuously poll the
data source to check for new data and load it into
the table.
Supported values: 'true', 'false'. The default value is 'false'.
- 'table_insert_mode': Insertion scheme
to use when inserting records from multiple
shapefiles.
Supported values:
- 'single': Insert all records into a
single table.
- 'table_per_file': Insert records from
each file into a new table corresponding to that
file.
The default value is 'single'.
- 'text_comment_string': Specifies the
character string that should be interpreted as a
comment line
prefix in the source data. All lines in the data
starting with the provided string are ignored.
For
delimited_text
file_type only. The default value is
'#'.
- 'text_delimiter': Specifies the
character delimiting field values in the source
data
and field names in the header (if present).
For
delimited_text
file_type only. The default value is
','.
- 'text_escape_character': Specifies the
character that is used to escape other characters
in
the source data.
An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by
an escape character will be interpreted as the
ASCII bell, backspace, form feed, line feed,
carriage return, horizontal tab, & vertical tab,
respectively. For example, the escape character
followed by an 'n' will be interpreted as a newline
within a field value.
The escape character can also be used to escape the
quoting character, and will be treated as an
escape character whether it is within a quoted
field value or not.
For
delimited_text
file_type only.
- 'text_has_header': Indicates whether
the source data contains a header row.
For
delimited_text
file_type only.
Supported values: 'true', 'false'. The default value is 'true'.
- 'text_header_property_delimiter':
Specifies the delimiter for
column properties in the header
row (if
present). Cannot be set to same value as
text_delimiter .
For delimited_text
file_type only. The default value is
'|'.
- 'text_null_string': Specifies the
character string that should be interpreted as a
null
value in the source data.
For
delimited_text
file_type only. The default value is
'\\N'.
- 'text_quote_character': Specifies the
character that should be interpreted as a field
value
quoting character in the source data. The
character must appear at beginning and end of field
value
to take effect. Delimiters within quoted fields
are treated as literals and not delimiters. Within
a quoted field, two consecutive quote characters
will be interpreted as a single literal quote
character, effectively escaping it. To not have a
quote character, specify an empty string.
For
delimited_text
file_type only. The default value is
'"'.
- 'text_search_columns': Add
'text_search' property to internally inferred
string columns.
Comma-separated list of column names or '*' for all
columns. To add 'text_search' property only to
string columns greater than or equal to a minimum
size, also set the
text_search_min_column_length
- 'text_search_min_column_length': Set
the minimum column size for strings to apply the
'text_search' property to. Used only when
text_search_columns has a value.
- 'truncate_strings': If set to
true , truncate string values that are
longer than the column's type size.
Supported values: 'true', 'false'. The default value is 'false'.
- 'truncate_table': If set to
true , truncates the table specified by
table_name prior to loading the
file(s).
Supported values: 'true', 'false'. The default value is 'false'.
- 'type_inference_mode': Optimize type
inferencing for either speed or accuracy.
Supported values:
- 'accuracy': Scans data to get
exactly-typed & sized columns for all data scanned.
- 'speed': Scans data and picks the
widest possible column types so that 'all' values
will fit with minimum data scanned
The default value is 'speed'.
- 'remote_query': Remote SQL query from
which data will be sourced
- 'remote_query_filter_column': Name of
column to be used for splitting
remote_query into multiple sub-queries
using the data distribution of given column
- 'remote_query_increasing_column':
Column on subscribed remote query result that will
increase for new records (e.g., TIMESTAMP).
- 'remote_query_partition_column': Alias
name for
remote_query_filter_column .
- 'update_on_existing_pk': Specifies the
record collision policy for inserting into a table
with a primary key. If set to
true , any existing table record with
primary
key values that match those of a record being
inserted will be replaced by that new record (the
new
data will be 'upserted'). If set to
false ,
any existing table record with primary key values
that match those of a record being inserted will
remain unchanged, while the new record will be
rejected and the error handled as determined by
ignore_existing_pk &
error_handling . If the
specified table does not have a primary key, then
this option has no effect.
Supported values:
- 'true': Upsert new records when
primary keys match existing records
- 'false': Reject new records when
primary keys match existing records
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
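A sketch of creating a materialized external table from a CSV file in KiFS, letting the structure be inferred from the source data (paths and names are hypothetical):

const GPUdb = require("./gpudb.js");  // assumed local path
const db = new GPUdb("http://localhost:9191");

db.create_table_external(
    "ki_home.ext_orders",
    ["kifs://data/orders.csv"],  // source file in KiFS
    {},                          // modify_columns (not implemented yet)
    {},                          // empty create_table_options: infer the structure
    {
        "file_type": "delimited_text",
        "error_handling": "ignore_bad_records"
    }
).then(function(response) {
    console.log(response);
});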
create_table_external_request(request, callback) → {Promise}
Creates a new
external table, which is a
local database object whose source data is located externally to the
database. The source data can
be located either in
KiFS;
on the cluster, accessible to the database; or
remotely, accessible via a pre-defined external
data source.
The external table can have its structure defined explicitly, via
create_table_options
,
which contains many of the options from GPUdb#create_table
; or
defined implicitly, inferred
from the source data.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_table_monitor(table_name, options, callback) → {Promise}
Creates a monitor that watches for a single table modification event
type (insert, update, or delete) on a particular table (identified by
table_name
) and forwards event notifications to subscribers via
ZMQ.
After this call completes, subscribe to the returned
topic_id
on the
ZMQ table monitor port (default 9002). Each time an operation of the given
type
on the table completes, a multipart message is published for that topic; the
first part contains only the topic ID, and each subsequent part contains one
binary-encoded Avro object that corresponds to the event and can be decoded
using
type_schema
. The monitor will continue to run (regardless
of
whether or not there are any subscribers) until deactivated with
GPUdb#clear_table_monitor
.
For more information on table monitors, see
Table
Monitors.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to monitor, in
[schema_name.]table_name format, using standard
name resolution rules. |
options |
Object
|
Optional parameters.
- 'event': Type of modification event on
the target table to be monitored by this table
monitor.
Supported values:
- 'insert': Get notifications of new
record insertions. The new row images are forwarded
to the subscribers.
- 'update': Get notifications of update
operations. The modified row count information is
forwarded to the subscribers.
- 'delete': Get notifications of delete
operations. The deleted row count information is
forwarded to the subscribers.
The default value is 'insert'.
- 'monitor_id': ID to use for this
monitor instead of a randomly generated one
- 'datasink_name': Name of an existing
data sink to send change data
notifications to
- 'destination': Destination for the
output data in format
'destination_type://path[:port]'. Supported
destination types are 'http', 'https' and 'kafka'.
- 'kafka_topic_name': Name of the Kafka
topic to publish to if
destination in
options is specified and is a Kafka
broker
- 'increasing_column': Column on
subscribed table that will increase for new records
(e.g., TIMESTAMP).
- 'expression': Filter expression to
limit records for notification
- 'refresh_method': Method controlling
when the table monitor reports changes to the
table_name .
Supported values:
- 'on_change': Report changes as they
occur.
- 'periodic': Report changes
periodically at rate specified by
refresh_period .
The default value is 'on_change'.
- 'refresh_period': When
refresh_method is
periodic , specifies the period in
seconds at which changes are reported.
- 'refresh_start_time': When
refresh_method is
periodic , specifies the first time at
which changes are reported. Value is a datetime
string with format 'YYYY-MM-DD HH:MM:SS'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
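A sketch of creating an insert monitor; the returned topic_id would then be used with a ZMQ subscriber on the table monitor port (table name is hypothetical):

const GPUdb = require("./gpudb.js");  // assumed local path
const db = new GPUdb("http://localhost:9191");

db.create_table_monitor("ki_home.orders", { "event": "insert" })
    .then(function(response) {
        // subscribe to this topic on the ZMQ table monitor port (default 9002);
        // decode each published message part using response.type_schema
        console.log("Topic ID: " + response.topic_id);
    });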
create_table_monitor_request(request, callback) → {Promise}
Creates a monitor that watches for a single table modification event
type (insert, update, or delete) on a particular table (identified by
table_name
) and forwards event notifications to subscribers via
ZMQ.
After this call completes, subscribe to the returned
topic_id
on the
ZMQ table monitor port (default 9002). Each time an operation of the given
type
on the table completes, a multipart message is published for that topic; the
first part contains only the topic ID, and each subsequent part contains one
binary-encoded Avro object that corresponds to the event and can be decoded
using
type_schema
. The monitor will continue to run (regardless
of
whether or not there are any subscribers) until deactivated with
GPUdb#clear_table_monitor
.
For more information on table monitors, see
Table
Monitors.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_table_request(request, callback) → {Promise}
Creates a new table. The type of the table is given by
type_id
, which must be the ID
of
a currently registered type (i.e. one created via
GPUdb#create_type
).
A table may optionally be designated to use a
replicated
distribution scheme,
or be assigned: foreign keys to
other tables, a partitioning
scheme, and/or a tier strategy.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_trigger_by_area(request_id, table_names, x_column_name, x_vector, y_column_name, y_vector, options, callback) → {Promise}
Sets up an area trigger mechanism for two column_names for one or
more tables. (This function is essentially the two-dimensional version of
GPUdb#create_trigger_by_range
.) Once the trigger has been
activated, any
record added to the listed table(s) via
GPUdb#insert_records
with the
chosen columns' values falling within the specified region will trip the
trigger. All such records will be queued at the trigger port (by default
'9001'
but able to be retrieved via
GPUdb#show_system_status
) for any
listening
client to collect. Active triggers can be cancelled by using the
GPUdb#clear_trigger
endpoint or by clearing all relevant
tables.
The output returns the trigger handle as well as indicating success or
failure
of the trigger activation.
Parameters:
Name |
Type |
Description |
request_id |
String
|
User-created ID for the trigger. The ID can be
alphanumeric, contain symbols, and must contain
at least one character. |
table_names |
Array.<String>
|
Names of the tables on which the trigger will
be activated and maintained, each in
[schema_name.]table_name format, using
standard name resolution rules. |
x_column_name |
String
|
Name of a numeric column on which the trigger
is activated. Usually 'x' for geospatial data
points. |
x_vector |
Array.<Number>
|
The respective coordinate values for the region
on which the trigger is activated. This usually
translates to the x-coordinates of a geospatial
region. |
y_column_name |
String
|
Name of a second numeric column on which the
trigger is activated. Usually 'y' for
geospatial data points. |
y_vector |
Array.<Number>
|
The respective coordinate values for the region
on which the trigger is activated. This usually
translates to the y-coordinates of a geospatial
region. Must be the same length as x_vector . |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
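A sketch of activating an area trigger over a rectangular region; the trigger ID, table, and coordinates are hypothetical, and the x and y vectors list the region's vertices and must be the same length:

const GPUdb = require("./gpudb.js");  // assumed local path
const db = new GPUdb("http://localhost:9191");

db.create_trigger_by_area(
    "dc_area_trigger",
    ["ki_home.points"],
    "x", [-77.12, -76.90, -76.90, -77.12],  // x-coordinates of the region's corners
    "y", [38.80, 38.80, 39.00, 39.00],      // matching y-coordinates
    {}
).then(function(response) {
    console.log(response);  // contains the trigger handle
});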
create_trigger_by_area_request(request, callback) → {Promise}
Sets up an area trigger mechanism for two column_names for one or
more tables. (This function is essentially the two-dimensional version of
GPUdb#create_trigger_by_range
.) Once the trigger has been
activated, any
record added to the listed table(s) via
GPUdb#insert_records
with the
chosen columns' values falling within the specified region will trip the
trigger. All such records will be queued at the trigger port (by default
'9001'
but able to be retrieved via
GPUdb#show_system_status
) for any
listening
client to collect. Active triggers can be cancelled by using the
GPUdb#clear_trigger
endpoint or by clearing all relevant
tables.
The output returns the trigger handle as well as indicating success or
failure
of the trigger activation.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_trigger_by_range(request_id, table_names, column_name, min, max, options, callback) → {Promise}
Sets up a simple range trigger for a column_name for one or more
tables. Once the trigger has been activated, any record added to the listed
table(s) via
GPUdb#insert_records
with the chosen
column_name's value
falling within the specified range will trip the trigger. All such records
will
be queued at the trigger port (by default '9001' but able to be retrieved
via
GPUdb#show_system_status
) for any listening client to collect.
Active
triggers can be cancelled by using the
GPUdb#clear_trigger
endpoint or by
clearing all relevant tables.
The output returns the trigger handle as well as indicating success or
failure
of the trigger activation.
Parameters:
Name |
Type |
Description |
request_id |
String
|
User-created ID for the trigger. The ID can be
alphanumeric, contain symbols, and must contain
at least one character. |
table_names |
Array.<String>
|
Tables on which the trigger will be active,
each in [schema_name.]table_name format,
using standard name resolution rules. |
column_name |
String
|
Name of a numeric column_name on which the
trigger is activated. |
min |
Number
|
The lower bound (inclusive) for the trigger range. |
max |
Number
|
The upper bound (inclusive) for the trigger range. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
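A sketch of a range trigger that trips when inserted values fall between 90 and 100 inclusive (names are hypothetical):

const GPUdb = require("./gpudb.js");  // assumed local path
const db = new GPUdb("http://localhost:9191");

db.create_trigger_by_range(
    "high_temp_trigger",
    ["ki_home.sensor_data"],
    "temperature",
    90,   // inclusive lower bound
    100,  // inclusive upper bound
    {}
).then(function(response) {
    console.log(response);  // contains the trigger handle
});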
create_trigger_by_range_request(request, callback) → {Promise}
Sets up a simple range trigger for a column_name for one or more
tables. Once the trigger has been activated, any record added to the listed
table(s) via
GPUdb#insert_records
with the chosen
column_name's value
falling within the specified range will trip the trigger. All such records
will
be queued at the trigger port (by default '9001' but able to be retrieved
via
GPUdb#show_system_status
) for any listening client to collect.
Active
triggers can be cancelled by using the
GPUdb#clear_trigger
endpoint or by
clearing all relevant tables.
The output returns the trigger handle as well as indicating success or
failure
of the trigger activation.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_type(type_definition, label, properties, options, callback) → {Promise}
Creates a new type describing the layout of a table. The type definition is
a JSON string describing the fields (i.e. columns) of the type. Each field
consists of a name and a data type. Supported data types are: double, float,
int, long, string, and bytes. In addition, one or more properties can be
specified for each column which customize the memory usage and query
availability of that column. Note that some properties are mutually
exclusive--i.e. they cannot be specified for any given column
simultaneously. One example of mutually exclusive properties are
data
and
store_only
.
A single primary key and/or single shard key can
be set across one or more columns. If a primary key is specified, then a
uniqueness constraint is enforced, in that only a single object can exist
with a given primary key column value (or set of values for the key columns,
if using a composite primary key). When
inserting
data into a table with a
primary key, depending on the parameters in the request, incoming objects
with primary key values that match existing objects will either overwrite
(i.e. update) the existing object or will be skipped and not added into the
set.
Example of a type definition with some of the parameters:
{"type":"record",
"name":"point",
"fields":[{"name":"msg_id","type":"string"},
{"name":"x","type":"double"},
{"name":"y","type":"double"},
{"name":"TIMESTAMP","type":"double"},
{"name":"source","type":"string"},
{"name":"group_id","type":"string"},
{"name":"OBJECT_ID","type":"string"}]
}
Properties:
{"group_id":["store_only"],
"msg_id":["store_only","text_search"]
}
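Registering this example type through the API might look like the following sketch; the label is arbitrary, and the connection setup is assumed:

const GPUdb = require("./gpudb.js");  // assumed local path
const db = new GPUdb("http://localhost:9191");

db.create_type(
    JSON.stringify({
        type: "record",
        name: "point",
        fields: [
            { name: "msg_id", type: "string" },
            { name: "x", type: "double" },
            { name: "y", type: "double" },
            { name: "TIMESTAMP", type: "double" },
            { name: "source", type: "string" },
            { name: "group_id", type: "string" },
            { name: "OBJECT_ID", type: "string" }
        ]
    }),
    "point",  // user-defined label
    { "group_id": ["store_only"], "msg_id": ["store_only", "text_search"] },
    {}
).then(function(response) {
    console.log("Type ID: " + response.type_id);
});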
Parameters:
Name |
Type |
Description |
type_definition |
String
|
A JSON string describing the columns of the
type to be registered. |
label |
String
|
A user-defined description string which can be used
to differentiate between tables and types with
otherwise identical schemas. |
properties |
Object
|
Each key-value pair specifies the properties to
use for a given column where the key is the
column name. All keys used must be relevant
column names for the given table. Specifying
any property overrides the default properties
for that column (which is based on the column's
data type).
Valid values are:
- 'data': Default property for all
numeric and string type columns; makes the
column available for GPU queries.
- 'text_search': Valid only for
select 'string' columns. Enables full text
search--see Full Text Search for details
and applicable string column types. Can be set
independently of
data and
store_only .
- 'store_only': Persist the column
value but do not make it available to queries
(e.g.
GPUdb#filter )--i.e. it is
mutually exclusive with the data
property. Any 'bytes' type column must have a
store_only property. This property
reduces system memory usage.
- 'disk_optimized': Works in
conjunction with the
data property
for string columns. This property reduces system
disk usage by disabling reverse string lookups.
Queries like GPUdb#filter ,
GPUdb#filter_by_list , and
GPUdb#filter_by_value work as
usual but GPUdb#aggregate_unique
and GPUdb#aggregate_group_by are
not allowed on columns with this property.
- 'timestamp': Valid only for 'long'
columns. Indicates that this field represents a
timestamp and will be provided in milliseconds
since the Unix epoch: 00:00:00 Jan 1 1970.
Dates represented by a timestamp must fall
between the year 1000 and the year 2900.
- 'ulong': Valid only for 'string'
columns. It represents an unsigned long integer
data type. The string can only be interpreted as
an unsigned long data type with minimum value of
zero, and maximum value of 18446744073709551615.
- 'uuid': Valid only for 'string'
columns. It represents a UUID data type.
Internally, it is stored as a 128-bit integer.
- 'decimal': Valid only for 'string'
columns. It represents a SQL type NUMERIC(19,
4) data type. There can be up to 15 digits
before the decimal point and up to four digits
in the fractional part. The value can be
positive or negative (indicated by a minus sign
at the beginning). This property is mutually
exclusive with the
text_search
property.
- 'date': Valid only for 'string'
columns. Indicates that this field represents a
date and will be provided in the format
'YYYY-MM-DD'. The allowable range is 1000-01-01
through 2900-01-01. This property is mutually
exclusive with the
text_search
property.
- 'time': Valid only for 'string'
columns. Indicates that this field represents a
time-of-day and will be provided in the format
'HH:MM:SS.mmm'. The allowable range is
00:00:00.000 through 23:59:59.999. This
property is mutually exclusive with the
text_search property.
- 'datetime': Valid only for 'string'
columns. Indicates that this field represents a
datetime and will be provided in the format
'YYYY-MM-DD HH:MM:SS.mmm'. The allowable range
is 1000-01-01 00:00:00.000 through 2900-01-01
23:59:59.999. This property is mutually
exclusive with the
text_search
property.
- 'char1': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 1 character.
- 'char2': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 2 characters.
- 'char4': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 4 characters.
- 'char8': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 8 characters.
- 'char16': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 16 characters.
- 'char32': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 32 characters.
- 'char64': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 64 characters.
- 'char128': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 128 characters.
- 'char256': This property provides
optimized memory, disk and query performance for
string columns. Strings with this property must
be no longer than 256 characters.
- 'boolean': This property provides
optimized memory and query performance for int
columns. Ints with this property must be between
0 and 1 (inclusive).
- 'int8': This property provides
optimized memory and query performance for int
columns. Ints with this property must be between
-128 and +127 (inclusive)
- 'int16': This property provides
optimized memory and query performance for int
columns. Ints with this property must be between
-32768 and +32767 (inclusive)
- 'ipv4': This property provides
optimized memory, disk and query performance for
string columns representing IPv4 addresses (i.e.
192.168.1.1). Strings with this property must be
of the form: A.B.C.D where A, B, C and D are in
the range of 0-255.
- 'wkt': Valid only for 'string' and
'bytes' columns. Indicates that this field
contains geospatial geometry objects in
Well-Known Text (WKT) or Well-Known Binary (WKB)
format.
- 'primary_key': This property
indicates that this column will be part of (or
the entire) primary key.
- 'shard_key': This property
indicates that this column will be part of (or
the entire) shard key.
- 'nullable': This property indicates
that this column is nullable. However, setting
this property is insufficient for making the
column nullable. The user must declare the type
of the column as a union between its regular
type and 'null' in the Avro schema for the
record type in type_definition .
For example, if a column is of type integer and
is nullable, then the entry for the column in
the Avro schema must be: ['int', 'null'].
The C++, C#, Java, and Python APIs have built-in
convenience for setting the Avro schema
automatically; for those languages, one can use
this property as usual and not have to worry
about the Avro schema for the record.
- 'dict': This property indicates
that this column should be dictionary encoded. It can
only be used in conjunction with restricted
string (charN), int, long or date columns.
Dictionary encoding is best for columns where
the cardinality (the number of unique values) is
expected to be low. This property can save a
large amount of memory.
- 'init_with_now': For 'date',
'time', 'datetime', or 'timestamp' column types,
replace empty strings and invalid timestamps
with 'NOW()' upon insert.
- 'init_with_uuid': For 'uuid' type,
replace empty strings and invalid UUID values
with randomly-generated UUIDs upon insert.
The default value is an empty dict ( {} ). |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
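Example (a minimal sketch using the promise interface; the require path, server URL, and label are assumptions, not part of this endpoint's contract):
    const GPUdb = require("./GPUdb.js");  // path to the API file is an assumption
    const db = new GPUdb("http://localhost:9191");

    // Register a reduced version of the example type above
    const type_definition = JSON.stringify({
        type: "record",
        name: "point",
        fields: [
            { name: "msg_id", type: "string" },
            { name: "x", type: "double" },
            { name: "y", type: "double" }
        ]
    });

    db.create_type(type_definition, "point_type",
            { msg_id: ["store_only", "text_search"] }, {})
        .then(response => console.log("Type registered:", response))
        .catch(error => console.error(error));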
create_type_request(request, callback) → {Promise}
Creates a new type describing the layout of a table. The type definition is
a JSON string describing the fields (i.e. columns) of the type. Each field
consists of a name and a data type. Supported data types are: double, float,
int, long, string, and bytes. In addition, one or more properties can be
specified for each column which customize the memory usage and query
availability of that column. Note that some properties are mutually
exclusive; i.e., they cannot be specified for any given column
simultaneously. One example of a mutually exclusive pair is
data and store_only.
A single primary key and/or single shard key can
be set across one or more columns. If a primary key is specified, then a
uniqueness constraint is enforced, in that only a single object can exist
with a given primary key column value (or set of values for the key columns,
if using a composite primary key). When
inserting
data into a table with a
primary key, depending on the parameters in the request, incoming objects
with primary key values that match existing objects will either overwrite
(i.e. update) the existing object or will be skipped and not added into the
set.
Example of a type definition with some of the parameters:
{"type":"record",
"name":"point",
"fields":[{"name":"msg_id","type":"string"},
{"name":"x","type":"double"},
{"name":"y","type":"double"},
{"name":"TIMESTAMP","type":"double"},
{"name":"source","type":"string"},
{"name":"group_id","type":"string"},
{"name":"OBJECT_ID","type":"string"}]
}
Properties:
{"group_id":["store_only"],
"msg_id":["store_only","text_search"]
}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_union(table_name, table_names, input_column_names, output_column_names, options, callback) → {Promise}
Merges data from one or more tables with comparable data types into a new
table.
The following merges are supported:
UNION (DISTINCT/ALL) - For data set union details and examples, see Union. For limitations,
see Union Limitations and Cautions.
INTERSECT (DISTINCT/ALL) - For data set intersection details and examples,
see Intersect. For
limitations, see Intersect Limitations.
EXCEPT (DISTINCT/ALL) - For data set subtraction details and examples, see
Except. For
limitations, see Except Limitations.
MERGE VIEWS - For a given set of filtered views on a single table, creates a single
filtered view containing all of the unique records across all of the given
filtered data sets.
Non-charN 'string' and 'bytes' column types cannot be merged, nor can
columns marked as store-only.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to be created, in
[schema_name.]table_name format, using standard
name resolution rules and
meeting table naming criteria. |
table_names |
Array.<String>
|
The list of table names to merge, in
[schema_name.]table_name format, using
standard name resolution rules.
Must contain the names of one or more
existing tables. |
input_column_names |
Array.<Array.<String>>
|
The list of columns from each of the
corresponding input tables. |
output_column_names |
Array.<String>
|
The list of names of the columns to
be stored in the output table. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of table_name . If
persist is false (or
unspecified), then this is always allowed even if
the caller does not have permission to create
tables. The generated name is returned in
qualified_table_name .
Supported values:
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the projection as
part of
table_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of the schema for the
output table. If the schema provided is
non-existent, it will be automatically created.
The default value is ''.
- 'mode': If
merge_views ,
then this operation will merge the provided views.
All table_names must be views from the
same underlying base table.
Supported values:
- 'union_all': Retains all rows from the
specified tables.
- 'union': Retains all unique rows from
the specified tables (synonym for
union_distinct ).
- 'union_distinct': Retains all unique
rows from the specified tables.
- 'except': Retains all unique rows from
the first table that do not appear in the second
table (only works on 2 tables).
- 'except_all': Retains all
rows (including duplicates) from the first table
that do not appear in the second table (only works
on 2 tables).
- 'intersect': Retains all unique rows
that appear in both of the specified tables (only
works on 2 tables).
- 'intersect_all': Retains all
rows (including duplicates) that appear in both of
the specified tables (only works on 2 tables).
- 'merge_views': Merge two or more views
(or views of views) of the same base data set into
a new view. If this mode is selected
input_column_names AND
output_column_names must be empty. The
resulting view would match the results of a SQL OR
operation, e.g., if filter 1 creates a view using
the expression 'x = 20' and filter 2 creates a view
using the expression 'x <= 10', then the merge
views operation creates a new view using the
expression 'x = 20 OR x <= 10'.
The default value is 'union_all'.
- 'chunk_size': Indicates the number of
records per chunk to be used for this output table.
- 'create_indexes': Comma-separated list
of columns on which to create indexes on the output
table. The columns specified must be present in
output_column_names .
- 'ttl': Sets the TTL
of the output table specified in
table_name .
- 'persist': If
true , then
the output table specified in
table_name will be persisted and will
not expire unless a ttl is specified.
If false , then the output table will
be an in-memory table and will expire unless a
ttl is specified otherwise.
Supported values:
The default value is 'false'.
- 'view_id': ID of view of which this
output table is a member. The default value is ''.
- 'force_replicated': If
true , then the output table specified
in table_name will be replicated even
if the source tables are not.
Supported values:
The default value is 'false'.
- 'strategy_definition': The tier strategy for the table and
its columns.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
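Example (a hedged sketch of a two-table UNION DISTINCT; the schema, table, and column names are hypothetical):
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191");

    db.create_union(
            "example.points_merged",                         // output table
            ["example.points_2023", "example.points_2024"],  // source tables
            [["x", "y"], ["x", "y"]],                        // columns from each source
            ["x", "y"],                                      // output column names
            { mode: "union_distinct" })
        .then(response => console.log(response))
        .catch(error => console.error(error));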
create_union_request(request, callback) → {Promise}
Merges data from one or more tables with comparable data types into a new
table.
The following merges are supported:
UNION (DISTINCT/ALL) - For data set union details and examples, see Union. For limitations,
see Union Limitations and Cautions.
INTERSECT (DISTINCT/ALL) - For data set intersection details and examples,
see Intersect. For
limitations, see Intersect Limitations.
EXCEPT (DISTINCT/ALL) - For data set subtraction details and examples, see
Except. For
limitations, see Except Limitations.
MERGE VIEWS - For a given set of filtered views on a single table, creates a single
filtered view containing all of the unique records across all of the given
filtered data sets.
Non-charN 'string' and 'bytes' column types cannot be merged, nor can
columns marked as store-only.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_user_external(name, options, callback) → {Promise}
Creates a new external user (a user whose credentials are managed by an
external LDAP).
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user to be created. Must exactly match the
user's name in the external LDAP, prefixed with a @.
Must not be the same name as an existing user. |
options |
Object
|
Optional parameters.
- 'resource_group': Name of an existing
resource group to associate with this user
- 'default_schema': Default schema to
associate with this user
- 'create_home_directory': When
true , a home directory in KiFS is
created for this user
Supported values:
The default value is 'true'.
- 'directory_data_limit': The maximum
capacity to apply to the created directory if
create_home_directory is
true . Set to -1 to indicate no upper
limit. If empty, the system default limit is
applied.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_user_external_request(request, callback) → {Promise}
Creates a new external user (a user whose credentials are managed by an
external LDAP).
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_user_internal(name, password, options, callback) → {Promise}
Creates a new internal user (a user whose credentials are managed by the
database system).
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user to be created. Must contain only
lowercase letters, digits, and underscores, and cannot
begin with a digit. Must not be the same name as an
existing user or role. |
password |
String
|
Initial password of the user to be created. May be
an empty string for no password. |
options |
Object
|
Optional parameters.
- 'resource_group': Name of an existing
resource group to associate with this user
- 'default_schema': Default schema to
associate with this user
- 'create_home_directory': When
true , a home directory in KiFS is
created for this user
Supported values:
The default value is 'true'.
- 'directory_data_limit': The maximum
capacity to apply to the created directory if
create_home_directory is
true . Set to -1 to indicate no upper
limit. If empty, the system default limit is
applied.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
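Example (a minimal sketch; the user name, password, and schema are placeholders, and the administrative credentials passed to the constructor are assumptions):
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191",
        { username: "admin", password: "admin_password" });

    db.create_user_internal("analyst_01", "initial_password",
            { default_schema: "example" })
        .then(() => console.log("User analyst_01 created"))
        .catch(error => console.error(error));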
create_user_internal_request(request, callback) → {Promise}
Creates a new internal user (a user whose credentials are managed by the
database system).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_video(attribute, begin, duration_seconds, end, frames_per_second, style, path, style_parameters, options, callback) → {Promise}
Creates a job to generate a sequence of raster images that visualize data
over a specified time.
Parameters:
Name |
Type |
Description |
attribute |
String
|
The animated attribute to map to the video's
frames. Must be present in the LAYERS specified
for the visualization. This is often a
time-related field but may be any numeric type. |
begin |
String
|
The start point for the video. Accepts an expression
evaluable over the attribute . |
duration_seconds |
Number
|
Seconds of video to produce |
end |
String
|
The end point for the video. Accepts an expression
evaluable over the attribute . |
frames_per_second |
Number
|
The presentation frame rate of the
encoded video in frames per second. |
style |
String
|
The name of the visualize mode; should correspond to
the schema used for the style_parameters
field.
Supported values:
- 'chart'
- 'raster'
- 'classbreak'
- 'contour'
- 'heatmap'
- 'labels'
|
path |
String
|
Fully-qualified KiFS path. Write access is
required. A file must not exist at that path, unless
replace_if_exists is true . |
style_parameters |
String
|
A string containing the JSON-encoded
visualize request. Must correspond to the
visualize mode specified in the
style field. |
options |
Object
|
Optional parameters.
- 'ttl': Sets the TTL
of the video.
- 'window': Specified using the
data-type corresponding to the
attribute . For a window of size W, a
video frame rendered for time t will visualize data
in the interval [t-W,t]. The minimum window size is
the interval between successive frames. The
minimum value is the default. If a value less than
the minimum value is specified, it is replaced with
the minimum window size. Larger values will make
changes throughout the video appear smoother,
while smaller values will capture fast variations
in the data.
- 'no_error_if_exists': If
true , does not return an error if the
video already exists. Ignored if
replace_if_exists is
true .
Supported values:
The default value is 'false'.
- 'replace_if_exists': If
true , deletes any existing video with
the same path before creating a new video.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
create_video_request(request, callback) → {Promise}
Creates a job to generate a sequence of raster images that visualize data
over a specified time.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
decode(o) → {Object|Array.<Object>}
Decodes a JSON string, or array of JSON strings, returned from GPUdb into
JSON object(s).
Parameters:
Name |
Type |
Description |
o |
String
|
Array.<String>
|
The JSON string(s) to decode. |
- Source:
Returns:
The decoded JSON object(s).
-
Type
-
Object
|
Array.<Object>
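Example (the JSON strings here are contrived; decode performs no server communication, so the constructor arguments are incidental):
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191");

    const records = db.decode(['{"x":1.5,"y":2.5}', '{"x":3.0,"y":4.0}']);
    console.log(records[1].y);  // prints 4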
delete_directory(directory_name, options, callback) → {Promise}
Deletes a directory from
KiFS.
Parameters:
Name |
Type |
Description |
directory_name |
String
|
Name of the directory in KiFS to be deleted.
The directory must contain no files, unless
recursive is true |
options |
Object
|
Optional parameters.
- 'recursive': If
true ,
the directory and all files residing in it will be deleted.
If false , the directory must be empty to be deleted.
Supported values:
The default value is 'false'.
- 'no_error_if_not_exists': If
true , no error is returned if the
specified directory does not exist.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_directory_request(request, callback) → {Promise}
Deletes a directory from
KiFS.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_files(file_names, options, callback) → {Promise}
Deletes one or more files from
KiFS.
Parameters:
Name |
Type |
Description |
file_names |
Array.<String>
|
An array of names of files to be deleted. File
paths may contain wildcard characters after
the KiFS directory delimiter.
Accepted wildcard characters are asterisk (*)
to represent any string of zero or more
characters, and question mark (?) to indicate
a single character. |
options |
Object
|
Optional parameters.
- 'no_error_if_not_exists': If
true , no error is returned if a
specified file does not exist
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
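Example (a minimal sketch deleting files by wildcard; the KiFS directory and file pattern are hypothetical):
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191");

    db.delete_files(["staging/archive_*.csv"],
            { no_error_if_not_exists: "true" })
        .then(response => console.log(response))
        .catch(error => console.error(error));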
delete_files_request(request, callback) → {Promise}
Deletes one or more files from
KiFS.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_graph(graph_name, options, callback) → {Promise}
Deletes an existing graph from the graph server and/or persist.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph to be deleted. |
options |
Object
|
Optional parameters.
- 'delete_persist': If set to
true , the graph is removed from the
server and persist. If set to false ,
the graph is removed from the server but is left in
persist. The graph can be reloaded from persist if
it is recreated with the same 'graph_name'.
Supported values:
The default value is 'true'.
- 'server_id': Indicates which graph
server(s) to send the request to. Defaults to
sending the request to all graph servers.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
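Example (a minimal sketch removing a graph from both the server and persist; the graph name is a placeholder):
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191");

    db.delete_graph("my_road_network", { delete_persist: "true" })
        .then(response => console.log(response))
        .catch(error => console.error(error));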
delete_graph_request(request, callback) → {Promise}
Deletes an existing graph from the graph server and/or persist.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_proc(proc_name, options, callback) → {Promise}
Deletes a proc. Any currently running instances of the proc will be killed.
Parameters:
Name |
Type |
Description |
proc_name |
String
|
Name of the proc to be deleted. Must be the name
of a currently existing proc. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_proc_request(request, callback) → {Promise}
Deletes a proc. Any currently running instances of the proc will be killed.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_records(table_name, expressions, options, callback) → {Promise}
Deletes record(s) matching the provided criteria from the given table. The
record selection criteria can either be one or more expressions (matching
multiple records), a single record identified by the record_id option, or
all records when using delete_all_records. Note that the three selection
criteria are mutually exclusive. This operation cannot be run on a view.
The operation is synchronous, meaning that a response will not be available
until the request is completely processed and all the matching records are
deleted.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table from which to delete records,
in [schema_name.]table_name format, using
standard name resolution rules. Must
contain the name of an existing table; not
applicable to views. |
expressions |
Array.<String>
|
A list of the actual predicates, one for each
select; format should follow the guidelines
provided here. Specifying one or
more expressions is mutually
exclusive to specifying
record_id in the
options . |
options |
Object
|
Optional parameters.
- 'global_expression': An optional
global expression to reduce the search space of the
expressions . The default value is ''.
- 'record_id': A record ID identifying a
single record, obtained at the time of
insertion of the record
or by calling
GPUdb#get_records_from_collection
with the return_record_ids option. This option
cannot be used to delete records from replicated tables.
- 'delete_all_records': If set to
true , all records in the table will be
deleted. If set to false , then the
option is effectively ignored.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
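Example (an expression-based sketch; the table name and predicate are hypothetical):
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191");

    db.delete_records("example.points", ["x > 100 and y < 0"], {})
        .then(response => console.log("Deleted", response.count_deleted, "records"))
        .catch(error => console.error(error));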
delete_records_request(request, callback) → {Promise}
Deletes record(s) matching the provided criteria from the given table. The
record selection criteria can either be one or more expressions (matching
multiple records), a single record identified by the record_id option, or
all records when using delete_all_records. Note that the three selection
criteria are mutually exclusive. This operation cannot be run on a view.
The operation is synchronous, meaning that a response will not be available
until the request is completely processed and all the matching records are
deleted.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_resource_group(name, options, callback) → {Promise}
Deletes a resource group.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the resource group to be deleted. |
options |
Object
|
Optional parameters.
- 'cascade_delete': If
true , delete any existing entities
owned by this group. Otherwise this request will
return an error if any such entities exist.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_resource_group_request(request, callback) → {Promise}
Deletes a resource group.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_role(name, options, callback) → {Promise}
Deletes an existing role.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the role to be deleted. Must be an existing
role. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_role_request(request, callback) → {Promise}
Deletes an existing role.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_user(name, options, callback) → {Promise}
Deletes an existing user.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user to be deleted. Must be an existing
user. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
delete_user_request(request, callback) → {Promise}
Deletes an existing user.
Note: This method should be used for on-premise deployments only.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
download_files(file_names, read_offsets, read_lengths, options, callback) → {Promise}
Downloads one or more files from
KiFS.
Parameters:
Name |
Type |
Description |
file_names |
Array.<String>
|
An array of the file names to download from
KiFS. File paths may contain wildcard
characters after the KiFS directory delimiter.
Accepted wildcard characters are asterisk (*)
to represent any string of zero or more
characters, and question mark (?) to indicate
a single character. |
read_offsets |
Array.<Number>
|
An array of starting byte offsets from which
to read each
respective file in file_names .
Must either be empty or the same length
as file_names . If empty, files
are downloaded in their entirety. If not
empty, read_lengths must also
not be empty. |
read_lengths |
Array.<Number>
|
Array of number of bytes to read from each
respective file
in file_names . Must either be
empty or the same length as
file_names . If empty, files are
downloaded in their entirety. If not
empty, read_offsets must also
not be empty. |
options |
Object
|
Optional parameters.
- 'file_encoding': Encoding to be
applied to the output file data. When using JSON
serialization it is recommended to specify this as
base64 .
Supported values:
- 'base64': Apply base64 encoding to the
output file data.
- 'none': Do not apply any encoding to
the output file data.
The default value is 'none'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
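Example (a sketch downloading one file in full, base64-encoded for safe JSON transport; the path is hypothetical):
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191");

    db.download_files(["reports/summary.csv"], [], [],
            { file_encoding: "base64" })
        .then(response => console.log(response))  // file names and data are in the response
        .catch(error => console.error(error));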
download_files_request(request, callback) → {Promise}
Downloads one or more files from
KiFS.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
drop_credential(credential_name, options, callback) → {Promise}
Drops an existing credential.
Parameters:
Name |
Type |
Description |
credential_name |
String
|
Name of the credential to be dropped. Must
be an existing credential. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
drop_credential_request(request, callback) → {Promise}
Drops an existing credential.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
drop_datasink(name, options, callback) → {Promise}
Drops an existing data sink.
By default, if any table monitors use this sink as a destination, the
request will be blocked unless option clear_table_monitors is true.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the data sink to be dropped. Must be an
existing data sink. |
options |
Object
|
Optional parameters.
- 'clear_table_monitors': If
true , any table monitors that use this data
sink will be cleared.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
drop_datasink_request(request, callback) → {Promise}
Drops an existing data sink.
By default, if any table monitors use this sink as a destination, the
request will be blocked unless option clear_table_monitors is true.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
drop_datasource(name, options, callback) → {Promise}
Drops an existing data source. Any external tables that depend on the
data source must be dropped before it can be dropped.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the data source to be dropped. Must be an
existing data source. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
drop_datasource_request(request, callback) → {Promise}
Drops an existing data source. Any external tables that depend on the
data source must be dropped before it can be dropped.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
drop_environment(environment_name, options, callback) → {Promise}
Drops an existing environment.
Parameters:
Name |
Type |
Description |
environment_name |
String
|
Name of the environment to be dropped.
Must be an existing environment. |
options |
Object
|
Optional parameters.
- 'no_error_if_not_exists': If
true and if the environment specified
in environment_name does not exist, no
error is returned. If false and if the
environment specified in
environment_name does not exist, then
an error is returned.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
drop_environment_request(request, callback) → {Promise}
Drops an existing environment.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
drop_schema(schema_name, options, callback) → {Promise}
Drops an existing SQL-style schema, specified in schema_name.
Parameters:
Name |
Type |
Description |
schema_name |
String
|
Name of the schema to be dropped. Must be an
existing schema. |
options |
Object
|
Optional parameters.
- 'no_error_if_not_exists': If
true and if the schema specified in
schema_name does not exist, no error
is returned. If false and if the
schema specified in schema_name does
not exist, then an error is returned.
Supported values:
The default value is 'false'.
- 'cascade': If
true , all
tables within the schema will be dropped. If
false , the schema will be dropped only
if empty.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
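Example (a minimal sketch that drops a schema along with all tables in it; the schema name is a placeholder):
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191");

    db.drop_schema("example_staging",
            { cascade: "true", no_error_if_not_exists: "true" })
        .then(() => console.log("Schema dropped"))
        .catch(error => console.error(error));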
drop_schema_request(request, callback) → {Promise}
Drops an existing SQL-style schema, specified in schema_name.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
execute_proc(proc_name, params, bin_params, input_table_names, input_column_names, output_table_names, options, callback) → {Promise}
Executes a proc. This endpoint is asynchronous and does not wait for
the proc to complete before returning.
If the proc being executed is distributed, input_table_names and
input_column_names may be passed to the proc to use for reading data,
and output_table_names may be passed to the proc to use for writing data.
If the proc being executed is non-distributed, these table parameters
will be ignored.
Parameters:
Name |
Type |
Description |
proc_name |
String
|
Name of the proc to execute. Must be the name of
a currently existing proc. |
params |
Object
|
A map containing named parameters to pass to the
proc. Each key/value pair specifies the name of a
parameter and its value. |
bin_params |
Object
|
A map containing named binary parameters to pass
to the proc. Each key/value pair specifies the
name of a parameter and its value. |
input_table_names |
Array.<String>
|
Names of the tables containing data to
be passed to the
proc. Each name specified must be the
name of a currently existing table, in
[schema_name.]table_name format, using
standard
name resolution
rules.
If no table names are specified, no
data will be passed to the proc. This
parameter is ignored if the proc has a
non-distributed execution mode. |
input_column_names |
Object
|
Map of table names from
input_table_names to lists
of names of columns from those tables
that will be passed to the proc. Each
column name specified must be the name
of an existing column in the
corresponding table. If a table name
from input_table_names is
not
included, all columns from that table
will be passed to the proc. This
parameter is ignored if the proc has a
non-distributed execution mode. |
output_table_names |
Array.<String>
|
Names of the tables to which output
data from the proc will
be written, each in
[schema_name.]table_name format, using
standard
name resolution
rules
and meeting table naming
criteria.
If a specified table does not exist,
it will automatically be created with
the
same schema as the corresponding table
(by order) from
input_table_names ,
excluding any primary and shard keys.
If a specified
table is a non-persistent result
table, it must not have primary or
shard keys.
If no table names are specified, no
output data can be returned from the
proc.
This parameter is ignored if the proc
has a non-distributed execution mode. |
options |
Object
|
Optional parameters.
- 'cache_input': A comma-delimited list
of table names from
input_table_names
from which input data will be cached for use in
subsequent calls to
GPUdb#execute_proc with the
use_cached_input option. Cached input
data will be retained until the proc status is
cleared with the
clear_complete
option of GPUdb#show_proc_status and
all proc instances using the cached data have
completed. The default value is ''.
- 'use_cached_input': A comma-delimited
list of run IDs (as returned from prior calls to
GPUdb#execute_proc ) of running or
completed proc instances from which input data
cached using the cache_input option
will be used. Cached input data will not be used
for any tables specified in
input_table_names , but data from all
other tables cached for the specified run IDs will
be passed to the proc. If the same table was cached
for multiple specified run IDs, the cached data
from the first run ID specified in the list that
includes that table will be used. The default
value is ''.
- 'run_tag': A string that, if not
empty, can be used in subsequent calls to
GPUdb#show_proc_status or
GPUdb#kill_proc to identify the proc
instance. The default value is ''.
- 'max_output_lines': The maximum number
of lines of output from stdout and stderr to return
via
GPUdb#show_proc_status . If the
number of lines output exceeds the maximum, earlier
lines are discarded. The default value is '100'.
- 'execute_at_startup': If
true , an instance of the proc will run
when the database is started instead of running
immediately. The run_id can be
retrieved using GPUdb#show_proc and
used in GPUdb#show_proc_status .
Supported values:
The default value is 'false'.
- 'execute_at_startup_as': Sets the
alternate user name to execute this proc instance
as when
execute_at_startup is
true . The default value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
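Example (a hedged sketch launching a distributed proc against one input and one output table; the proc, parameter, and table names are all hypothetical):
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191");

    db.execute_proc(
            "my_distributed_proc",
            { iterations: "10" },        // named string parameters
            {},                          // no binary parameters
            ["example.input_table"],
            {},                          // pass all input columns
            ["example.output_table"],
            { run_tag: "nightly_run" })
        .then(response => console.log("Run ID:", response.run_id))
        .catch(error => console.error(error));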
execute_proc_request(request, callback) → {Promise}
Executes a proc. This endpoint is asynchronous and does not wait for
the proc to complete before returning.
If the proc being executed is distributed, input_table_names and
input_column_names may be passed to the proc to use for reading data,
and output_table_names may be passed to the proc to use for writing data.
If the proc being executed is non-distributed, these table parameters
will be ignored.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
execute_sql(statement, offset, limit, request_schema_str, data, options, callback) → {Promise}
Execute a SQL statement (query, DML, or DDL).
See SQL Support for the complete
set of supported SQL commands.
Parameters:
Name |
Type |
Description |
statement |
String
|
SQL statement (query, DML, or DDL) to be executed |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned, or
END_OF_SET (-9999) to indicate that the maximum
number of results allowed by the server should be
returned. The number of records returned will never
exceed the server's own limit, defined by the
max_get_records_size parameter in
the server configuration.
Use has_more_records to see if more
records exist in the result to be fetched, and
offset & limit to request
subsequent pages of results. |
request_schema_str |
String
|
Avro schema of data . |
data |
Array.<String>
|
An array of binary-encoded data for the records to
be bound to the SQL query. Alternatively, use
query_parameters to pass the data in
JSON format. |
options |
Object
|
Optional parameters.
- 'cost_based_optimization': If
false , disables the cost-based
optimization of the given query.
Supported values:
The default value is 'false'.
- 'distributed_joins': If
true , enables the use of distributed
joins in servicing the given query. Any query
requiring a distributed join will succeed, though
hints can be used in the query to change the
distribution of the source data to allow the query
to succeed.
Supported values:
The default value is 'false'.
- 'distributed_operations': If
true , enables the use of distributed
operations in servicing the given query. Any query
requiring a distributed join will succeed, though
hints can be used in the query to change the
distribution of the source data to allow the query
to succeed.
Supported values:
The default value is 'false'.
- 'ignore_existing_pk': Specifies the
record collision error-suppression policy for
inserting into or updating a table with a primary key, only
used when primary key record collisions are
rejected (
update_on_existing_pk
is false ). If set to
true , any record insert/update that is
rejected
for resulting in a primary key collision with an
existing table record will be ignored with no error
generated. If false , the rejection of
any
insert/update for resulting in a primary key
collision will cause an error to be reported. If
the
specified table does not have a primary key or if
update_on_existing_pk is
true , then this option has no effect.
Supported values:
- 'true': Ignore inserts/updates that
result in primary key collisions with existing
records
- 'false': Treat as errors any
inserts/updates that result in primary key
collisions with existing records
The default value is 'false'.
- 'late_materialization': If
true , Joins/Filters results will
always be materialized (saved to result tables
format).
Supported values:
The default value is 'false'.
- 'paging_table': When empty, or when the
specified paging table does not exist, the system
will create a paging table and return it when the
query output has more records than requested. If
the paging table exists in the system, the records
from the paging table are returned without
evaluating the query.
- 'paging_table_ttl': Sets the TTL
of the paging table.
- 'parallel_execution': If
false , disables the parallel step
execution of the given query.
Supported values:
The default value is 'true'.
- 'plan_cache': If
false ,
disables plan caching for the given query.
Supported values:
The default value is 'true'.
- 'prepare_mode': If
true ,
compiles a query into an execution plan and saves
it in the query cache. Query execution is not performed
and an empty response will be returned to the user.
Supported values:
The default value is 'false'.
- 'preserve_dict_encoding': If
true , then columns that were dict
encoded in the source table will be dict encoded in
the projection table.
Supported values:
The default value is 'true'.
- 'query_parameters': Query parameters
in JSON array or arrays (for inserting multiple
rows). This can be used instead of
data and
request_schema_str .
- 'results_caching': If
false , disables caching of the results
of the given query
Supported values:
The default value is 'true'.
- 'rule_based_optimization': If
false , disables rule-based rewrite
optimizations for the given query
Supported values:
The default value is 'true'.
- 'ssq_optimization': If
false , scalar subqueries will be
translated into joins
Supported values:
The default value is 'true'.
- 'ttl': Sets the TTL
of the intermediate result tables used in query
execution.
- 'update_on_existing_pk': Specifies the
record collision policy for inserting into or
updating
a table with a primary key. If set to
true , any existing table record with
primary
key values that match those of a record being
inserted or updated will be replaced by that
record.
If set to false , any such primary key
collision will result in the insert/update being
rejected and the error handled as determined by
ignore_existing_pk . If the specified
table does not have a primary key,
then this option has no effect.
Supported values:
- 'true': Replace the collided-into
record with the record inserted or updated when a
new/modified record causes a primary key collision
with an existing record
- 'false': Reject the insert or update
when it results in a primary key collision with an
existing record
The default value is 'false'.
- 'validate_change_column': When
changing a column using alter table, validate the
change before applying it. If
true ,
then validate all values. A value too large (or too
long) for the new type will prevent any change. If
false , then when a value is too large
or long, it will be truncated.
Supported values:
The default value is 'true'.
- 'current_schema': Use the supplied
value as the default schema when processing
this SQL command.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
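Example (a minimal sketch; request_schema_str and data are left empty since no binary-encoded parameters are bound, and the table name is hypothetical):
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191");

    db.execute_sql("SELECT x, y FROM example.points WHERE x > 10",
            0, 100, "", [], {})
        .then(response => console.log("More records?", response.has_more_records))
        .catch(error => console.error(error));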
execute_sql_request(request, callback) → {Promise}
Execute a SQL statement (query, DML, or DDL).
See SQL Support for the complete
set of supported SQL commands.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
export_records_to_files(table_name, filepath, options, callback) → {Promise}
Export records from a table to files. All tables can be exported, in full
or in part (see columns_to_export and columns_to_skip).
Additional filtering can be applied when using export table with expression
through SQL.
The default destination is KiFS, though other storage types (Azure, S3,
GCS, and HDFS) are supported through datasink_name; see
GPUdb#create_datasink.
The server's local file system is not supported. The default file format is
delimited text. See options for the different file types and the different
options for each file type. The table is saved to a single file if it is
within the max file size limits (which may vary depending on datasink
type). If not, the table is split into multiple files; these may be
smaller than the max size limit.
All filenames created are returned in the response.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table from which the data will be
exported, in [schema_name.]table_name format, using
standard name resolution rules. |
filepath |
String
|
Path to data export target. If
filepath has a file extension, it is
read as the name of a file. If
filepath is a directory, then the
source table name with a
random UUID appended will be used as the name of
each exported file, all written to that directory.
If filepath is a filename, then all exported files
will have a random UUID appended to the given
name. In either case, the target directory
specified or implied must exist. The names of all
exported files are returned in the response. |
options |
Object
|
Optional parameters.
- 'batch_size': Number of records to be
exported as a batch. The default value is
'1000000'.
- 'column_formats': For each source
column specified, applies the column-property-bound
format. Currently supported column properties
include date, time, & datetime. The parameter value
must be formatted as a JSON string of maps of
column names to maps of column properties to their
corresponding column formats, e.g.,
'{ "order_date" : { "date" : "%Y.%m.%d" },
"order_time" : { "time" : "%H:%M:%S" } }'.
See
default_column_formats for valid
format syntax.
- 'columns_to_export': Specifies a
comma-delimited list of columns from the source
table to
export, written to the output file in the order
they are given.
Column names can be provided, in which case the
target file will use those names as the column
headers as well.
Alternatively, column numbers can be
specified--discretely or as a range. For example,
a value of
'5,7,1..3' will write values from the fifth column
in the source table into the first column in the
target file, from the seventh column in the source
table into the second column in the target file,
and from the first through third columns in the
source table into the third through fifth columns
in
the target file.
Mutually exclusive with
columns_to_skip .
- 'columns_to_skip': Comma-separated
list of column names or column numbers to not
export. All columns in the source table not
specified will be written to the target file in the
order they appear in the table definition.
Mutually exclusive with
columns_to_export .
- 'datasink_name': Datasink name,
created using
GPUdb#create_datasink .
- 'default_column_formats': Specifies
the default format to use to write data. Currently
supported column properties include date, time, &
datetime. This default column-property-bound
format can be overridden by specifying a column
property & format for a given source column in
column_formats . For each specified
annotation, the format will apply to all
columns with that annotation unless custom
column_formats for that
annotation are specified.
The parameter value must be formatted as a JSON
string that is a map of column properties to their
respective column formats, e.g., '{ "date" :
"%Y.%m.%d", "time" : "%H:%M:%S" }'. Column
formats are specified as a string of control
characters and plain text. The supported control
characters are 'Y', 'm', 'd', 'H', 'M', 'S', and
's', which follow the Linux 'strptime()'
specification, as well as 's', which specifies
seconds and fractional seconds (though the
fractional
component will be truncated past milliseconds).
Formats for the 'date' annotation must include the
'Y', 'm', and 'd' control characters. Formats for
the 'time' annotation must include the 'H', 'M',
and either 'S' or 's' (but not both) control
characters. Formats for the 'datetime' annotation
meet both the 'date' and 'time' control character
requirements. For example, '{"datetime" : "%m/%d/%Y
%H:%M:%S" }' would be used to write text
as "05/04/2000 12:12:11"
- 'export_ddl': Save DDL to a separate
file. The default value is 'false'.
- 'file_extension': Extension to give
the export file. The default value is '.csv'.
- 'file_type': Specifies the file format
to use when exporting data.
Supported values:
- 'delimited_text': Delimited text file
format; e.g., CSV, TSV, PSV, etc.
- 'parquet'
The default value is 'delimited_text'.
- 'kinetica_header': Whether to include
a Kinetica proprietary header. Will not be
written if
text_has_header is
false .
Supported values:
The default value is 'false'.
- 'kinetica_header_delimiter': If a
Kinetica proprietary header is included, then
specify a
property separator. Different from column
delimiter. The default value is '|'.
- 'compression_type': File compression
type. GZip can be applied to text and Parquet
files. Snappy can only be applied to Parquet
files, and is the default compression for them.
Supported values:
- 'uncompressed'
- 'snappy'
- 'gzip'
- 'single_file': Save records to a
single file. This option may be ignored if file
size exceeds internal file size limits (this limit
will differ on different targets).
Supported values:
- 'true'
- 'false'
- 'overwrite'
The default value is 'true'.
- 'single_file_max_size': Max file size
(in MB) to allow saving to a single file. May be
overridden by target limitations. The default
value is ''.
- 'text_delimiter': Specifies the
character to write out to delimit field values and
field names in the header (if present).
For
delimited_text
file_type only. The default value is
','.
- 'text_has_header': Indicates whether
to write out a header row.
For
delimited_text
file_type only.
Supported values:
The default value is 'true'.
- 'text_null_string': Specifies the
character string that should be written out for the
null
value in the data.
For
delimited_text
file_type only. The default value is
'\\N'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
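Example (a minimal sketch exporting a table to a single CSV in KiFS; the table and target path are placeholders):
    const GPUdb = require("./GPUdb.js");
    const db = new GPUdb("http://localhost:9191");

    db.export_records_to_files("example.points", "exports/points.csv",
            { file_type: "delimited_text", text_delimiter: ",", single_file: "true" })
        .then(response => console.log(response))  // created filenames are returned in the response
        .catch(error => console.error(error));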
export_records_to_files_request(request, callback) → {Promise}
Export records from a table to files. All tables can be exported, in full
or in part (see columns_to_export and columns_to_skip).
Additional filtering can be applied when using export table with expression
through SQL.
The default destination is KiFS, though other storage types (Azure, S3,
GCS, and HDFS) are supported through datasink_name; see
GPUdb#create_datasink.
The server's local file system is not supported. The default file format is
delimited text. See options for the different file types and the different
options for each file type. The table is saved to a single file if it is
within the max file size limits (which may vary depending on datasink
type). If not, the table is split into multiple files; these may be
smaller than the max size limit.
All filenames created are returned in the response.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
export_records_to_table(table_name, remote_query, options, callback) → {Promise}
Exports records from a source table to the specified target table in an
external database.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table from which the data will be
exported to the remote database, in
[schema_name.]table_name format, using standard
name resolution rules. |
remote_query |
String
|
Parameterized insert query used to export GPUdb
table data into the remote database
options |
Object
|
Optional parameters.
- 'batch_size': Batch size, which
determines how many rows to export per round trip.
The default value is '200000'.
- 'datasink_name': Name of an existing
external data sink to which the table specified in
table_name will be exported
- 'jdbc_session_init_statement':
Executes the given statement for each JDBC session
before the actual load. The default value is ''.
- 'jdbc_connection_init_statement':
Executes the given statement once before the actual
load. The default value is ''.
- 'remote_table': Name of the target
table to which the source table is exported. When
this option is specified, remote_query cannot be
specified. The default value is ''.
- 'use_st_geomfrom_casts': Wraps
parameterized variables with st_geomfromtext or
st_geomfromwkb, based on the source column type.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'use_indexed_parameters': Uses $n-style
syntax when generating the insert query for the
remote_table option.
Supported values:
- 'true'
- 'false'
The default value is 'true'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
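A minimal usage sketch, assuming a connected GPUdb instance db and an existing data sink named jdbc_sink; the table name and parameterized query are hypothetical:

db.export_records_to_table(
    "ki_home.orders",                         // source table to export
    "INSERT INTO remote_orders (id, total) VALUES ($1, $2)",
    { "datasink_name": "jdbc_sink", "batch_size": "100000" }
).then(response => console.log("export finished", response))
 .catch(error => console.error(error));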
export_records_to_table_request(request, callback) → {Promise}
Exports records from a source table to the specified target table in an
external database.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter(table_name, view_name, expression, options, callback) → {Promise}
Filters data based on the specified expression. The results are
stored in a
result
set with the
given
view_name
.
For details see Expressions.
The response message contains the number of points for which the expression
evaluated to be true, which is equivalent to the size of the result view.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to filter, in
[schema_name.]table_name format, using standard
name resolution rules. This
may be the name of a table or a view (when
chaining queries). |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
expression |
String
|
The select expression to filter the specified
table. For details see Expressions. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema is non-existent,
it will be automatically created.
- 'view_id': The view this filtered view is
part of. The default value is ''.
- 'ttl': Sets the TTL
of the view specified in
view_name .
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
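A minimal usage sketch, assuming a connected GPUdb instance db; the table, view, and column names are hypothetical, and the response's count field is assumed to carry the match count described above:

db.filter(
    "ki_home.nyctaxi",                        // table (or view) to filter
    "ki_home.nyctaxi_january",                // view to hold the results
    "pickup_datetime >= '2015-01-01' AND pickup_datetime < '2015-02-01'",
    { "ttl": "120" }                          // optional parameters
).then(response => console.log("matches:", response.count))
 .catch(error => console.error(error));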
filter_by_area(table_name, view_name, x_column_name, x_vector, y_column_name, y_vector, options, callback) → {Promise}
Calculates which objects from a table are within a named area of
interest (NAI/polygon). The operation is synchronous, meaning that a
response
will not be returned until all the matching objects are fully available. The
response payload provides the count of the resulting set. A new resultant
set
(view) which satisfies the input NAI restriction specification is created
with
the name view_name
passed in as part of the input.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to filter, in
[schema_name.]table_name format, using standard
name resolution rules. This
may be the name of a table or a view (when
chaining queries). |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
x_column_name |
String
|
Name of the column containing the x values to
be filtered. |
x_vector |
Array.<Number>
|
List of x coordinates of the vertices of the
polygon representing the area to be filtered. |
y_column_name |
String
|
Name of the column containing the y values to
be filtered. |
y_vector |
Array.<Number>
|
List of y coordinates of the vertices of the
polygon representing the area to be filtered. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema provided is
non-existent, it will be automatically created.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
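A polygon-filter sketch under the same assumptions (connected instance db, hypothetical names); the x and y vectors list the polygon's vertices in order:

db.filter_by_area(
    "ki_home.events", "ki_home.events_in_zone",
    "lon", [-74.0, -73.9, -73.9, -74.0],      // x coordinates of the vertices
    "lat", [40.7, 40.7, 40.8, 40.8],          // y coordinates of the vertices
    {}
).then(response => console.log("inside polygon:", response.count));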
filter_by_area_geometry(table_name, view_name, column_name, x_vector, y_vector, options, callback) → {Promise}
Calculates which geospatial geometry objects from a table intersect
a named area of interest (NAI/polygon). The operation is synchronous,
meaning
that a response will not be returned until all the matching objects are
fully
available. The response payload provides the count of the resulting set. A
new
resultant set (view) which satisfies the input NAI restriction specification
is
created with the name view_name
passed in as part of the input.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to filter, in
[schema_name.]table_name format, using standard
name resolution rules. This
may be the name of a table or a view (when
chaining queries). |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
column_name |
String
|
Name of the geospatial geometry column to be
filtered. |
x_vector |
Array.<Number>
|
List of x coordinates of the vertices of the
polygon representing the area to be filtered. |
y_vector |
Array.<Number>
|
List of y coordinates of the vertices of the
polygon representing the area to be filtered. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] The schema for the newly
created view. If the schema is non-existent, it
will be automatically created.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_area_geometry_request(request, callback) → {Promise}
Calculates which geospatial geometry objects from a table intersect
a named area of interest (NAI/polygon). The operation is synchronous,
meaning
that a response will not be returned until all the matching objects are
fully
available. The response payload provides the count of the resulting set. A
new
resultant set (view) which satisfies the input NAI restriction specification
is
created with the name view_name
passed in as part of the input.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_area_request(request, callback) → {Promise}
Calculates which objects from a table are within a named area of
interest (NAI/polygon). The operation is synchronous, meaning that a
response
will not be returned until all the matching objects are fully available. The
response payload provides the count of the resulting set. A new resultant
set
(view) which satisfies the input NAI restriction specification is created
with
the name view_name
passed in as part of the input.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_box(table_name, view_name, x_column_name, min_x, max_x, y_column_name, min_y, max_y, options, callback) → {Promise}
Calculates how many objects within the given table lie in a
rectangular box. The operation is synchronous, meaning that a response will
not
be returned until all the objects are fully available. The response payload
provides the count of the resulting set. A new resultant set which satisfies
the
input NAI restriction specification is also created when a
view_name
is
passed in as part of the input payload.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the bounding box
operation will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. Must
be an existing table. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
x_column_name |
String
|
Name of the column on which to perform the
bounding box query. Must be a valid numeric
column. |
min_x |
Number
|
Lower bound for the column chosen by
x_column_name . Must be less than or
equal to max_x . |
max_x |
Number
|
Upper bound for x_column_name . Must be
greater than or equal to min_x . |
y_column_name |
String
|
Name of a column on which to perform the
bounding box query. Must be a valid numeric
column. |
min_y |
Number
|
Lower bound for y_column_name . Must be
less than or equal to max_y . |
max_y |
Number
|
Upper bound for y_column_name . Must be
greater than or equal to min_y . |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema is non-existent,
it will be automatically created.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
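A bounding-box sketch (hypothetical names, connected instance db); note that min_x must not exceed max_x, and likewise for y:

db.filter_by_box(
    "ki_home.events", "ki_home.events_in_box",
    "lon", -74.1, -73.7,                      // x column, min_x, max_x
    "lat", 40.5, 40.9,                        // y column, min_y, max_y
    {}
).then(response => console.log("inside box:", response.count));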
filter_by_box_geometry(table_name, view_name, column_name, min_x, max_x, min_y, max_y, options, callback) → {Promise}
Calculates which geospatial geometry objects from a table intersect
a rectangular box. The operation is synchronous, meaning that a response
will
not be returned until all the objects are fully available. The response
payload
provides the count of the resulting set. A new resultant set which satisfies
the
input NAI restriction specification is also created when a
view_name
is
passed in as part of the input payload.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the bounding box
operation will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. Must be
an existing table. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
column_name |
String
|
Name of the geospatial geometry column to be
filtered. |
min_x |
Number
|
Lower bound for the x-coordinate of the rectangular
box. Must be less than or equal to
max_x . |
max_x |
Number
|
Upper bound for the x-coordinate of the rectangular
box. Must be greater than or equal to
min_x . |
min_y |
Number
|
Lower bound for the y-coordinate of the rectangular
box. Must be less than or equal to
max_y . |
max_y |
Number
|
Upper bound for the y-coordinate of the rectangular
box. Must be greater than or equal to
min_y . |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema provided is
non-existent, it will be automatically created.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_box_geometry_request(request, callback) → {Promise}
Calculates which geospatial geometry objects from a table intersect
a rectangular box. The operation is synchronous, meaning that a response
will
not be returned until all the objects are fully available. The response
payload
provides the count of the resulting set. A new resultant set which satisfies
the
input NAI restriction specification is also created when a
view_name
is
passed in as part of the input payload.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_box_request(request, callback) → {Promise}
Calculates how many objects within the given table lie in a
rectangular box. The operation is synchronous, meaning that a response will
not
be returned until all the objects are fully available. The response payload
provides the count of the resulting set. A new resultant set which satisfies
the
input NAI restriction specification is also created when a
view_name
is
passed in as part of the input payload.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_geometry(table_name, view_name, column_name, input_wkt, operation, options, callback) → {Promise}
Applies a geometry filter against a geospatial geometry column in a
given table or view. The filtering geometry is provided by
input_wkt
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter by
geometry will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. Must
be an existing table or view containing a
geospatial geometry column. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
column_name |
String
|
Name of the column to be used in the filter.
Must be a geospatial geometry column. |
input_wkt |
String
|
A geometry in WKT format that will be used to
filter the objects in table_name . |
operation |
String
|
The geometric filtering operation to perform
Supported values:
- 'contains': Matches records that
contain the given WKT in
input_wkt ,
i.e. the given WKT is within the bounds of a
record's geometry.
- 'crosses': Matches records that
cross the given WKT.
- 'disjoint': Matches records that are
disjoint from the given WKT.
- 'equals': Matches records that are
the same as the given WKT.
- 'intersects': Matches records that
intersect the given WKT.
- 'overlaps': Matches records that
overlap the given WKT.
- 'touches': Matches records that
touch the given WKT.
- 'within': Matches records that are
within the given WKT.
|
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema provided is
non-existent, it will be automatically created.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
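A WKT-based sketch (hypothetical names); here the 'within' operation keeps records whose geometry lies inside the given polygon:

db.filter_by_geometry(
    "ki_home.parcels", "ki_home.parcels_downtown",
    "geom",                                   // geospatial geometry column
    "POLYGON((-74.0 40.7, -73.9 40.7, -73.9 40.8, -74.0 40.8, -74.0 40.7))",
    "within",
    {}
).then(response => console.log("within polygon:", response.count));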
filter_by_geometry_request(request, callback) → {Promise}
Applies a geometry filter against a geospatial geometry column in a
given table or view. The filtering geometry is provided by
input_wkt
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_list(table_name, view_name, column_values_map, options, callback) → {Promise}
Calculates which records from a table have values in the given list
for the corresponding column. The operation is synchronous, meaning that a
response will not be returned until all the objects are fully available. The
response payload provides the count of the resulting set. A new resultant
set
(view) which satisfies the input filter specification is also created if a
view_name
is passed in as part of the request.
For example, if a type definition has the columns 'x' and 'y', then a filter
by
list query with the column map
{"x":["10.1", "2.3"], "y":["0.0", "-31.5", "42.0"]} will return
the count of all data points whose x and y values match both in the
respective
x- and y-lists, e.g., "x = 10.1 and y = 0.0", "x = 2.3 and y = -31.5", etc.
However, a record with "x = 10.1 and y = -31.5" or "x = 2.3 and y = 0.0"
would not be returned because the values in the given lists do not
correspond.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to filter, in
[schema_name.]table_name format, using standard
name resolution rules. This
may be the name of a table or a view (when
chaining queries). |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
column_values_map |
Object
|
List of values for the corresponding
column in the table |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema provided is
non-existent, it will be automatically created.
- 'filter_mode': String indicating the
filter mode, either 'in_list' or 'not_in_list'.
Supported values:
- 'in_list': The filter will match all
items that are in the provided list(s).
- 'not_in_list': The filter will match
all items that are not in the provided list(s).
The default value is 'in_list'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
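A sketch of the column-values map from the example above (hypothetical table and view names); note that list values are passed as strings:

db.filter_by_list(
    "ki_home.points", "ki_home.points_matched",
    { "x": ["10.1", "2.3"], "y": ["0.0", "-31.5", "42.0"] },
    { "filter_mode": "in_list" }
).then(response => console.log("matched:", response.count));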
filter_by_list_request(request, callback) → {Promise}
Calculates which records from a table have values in the given list
for the corresponding column. The operation is synchronous, meaning that a
response will not be returned until all the objects are fully available. The
response payload provides the count of the resulting set. A new resultant
set
(view) which satisfies the input filter specification is also created if a
view_name
is passed in as part of the request.
For example, if a type definition has the columns 'x' and 'y', then a filter
by
list query with the column map
{"x":["10.1", "2.3"], "y":["0.0", "-31.5", "42.0"]} will return
the count of all data points whose x and y values match both in the
respective
x- and y-lists, e.g., "x = 10.1 and y = 0.0", "x = 2.3 and y = -31.5", etc.
However, a record with "x = 10.1 and y = -31.5" or "x = 2.3 and y = 0.0"
would not be returned because the values in the given lists do not
correspond.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_radius(table_name, view_name, x_column_name, x_center, y_column_name, y_center, radius, options, callback) → {Promise}
Calculates which objects from a table lie within a circle with the
given radius and center point (i.e. circular NAI). The operation is
synchronous,
meaning that a response will not be returned until all the objects are fully
available. The response payload provides the count of the resulting set. A
new
resultant set (view) which satisfies the input circular NAI restriction
specification is also created if a
view_name
is passed in as
part of
the request.
For track data, all track points that lie within the circle plus one point
on
either side of the circle (if the track goes beyond the circle) will be
included
in the result.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter by radius
operation will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. Must
be an existing table. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
x_column_name |
String
|
Name of the column to be used for the
x-coordinate (the longitude) of the center. |
x_center |
Number
|
Value of the longitude of the center. Must be
within [-180.0, 180.0]. |
y_column_name |
String
|
Name of the column to be used for the
y-coordinate (the latitude) of the center. |
y_center |
Number
|
Value of the latitude of the center. Must be
within [-90.0, 90.0]. |
radius |
Number
|
The radius of the circle within which the search
will be performed. Must be a non-zero positive
value. It is in meters; so, for example, a value of
'42000' means 42 km. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema which is
to contain the newly created view. If the schema is
non-existent, it will be automatically created.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
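A circular-NAI sketch (hypothetical names); the radius is in meters, so 42000 means 42 km:

db.filter_by_radius(
    "ki_home.events", "ki_home.events_nearby",
    "lon", -73.98,                            // x column and center longitude
    "lat", 40.75,                             // y column and center latitude
    42000,                                    // radius, in meters
    {}
).then(response => console.log("within circle:", response.count));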
filter_by_radius_geometry(table_name, view_name, column_name, x_center, y_center, radius, options, callback) → {Promise}
Calculates which geospatial geometry objects from a table intersect
a circle with the given radius and center point (i.e. circular NAI). The
operation is synchronous, meaning that a response will not be returned until
all
the objects are fully available. The response payload provides the count of
the
resulting set. A new resultant set (view) which satisfies the input circular
NAI
restriction specification is also created if a view_name
is
passed in
as part of the request.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter by radius
operation will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. Must
be an existing table. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
column_name |
String
|
Name of the geospatial geometry column to be
filtered. |
x_center |
Number
|
Value of the longitude of the center. Must be
within [-180.0, 180.0]. |
y_center |
Number
|
Value of the latitude of the center. Must be
within [-90.0, 90.0]. |
radius |
Number
|
The radius of the circle within which the search
will be performed. Must be a non-zero positive
value. It is in meters; so, for example, a value of
'42000' means 42 km. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema provided is
non-existent, it will be automatically created.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_radius_geometry_request(request, callback) → {Promise}
Calculates which geospatial geometry objects from a table intersect
a circle with the given radius and center point (i.e. circular NAI). The
operation is synchronous, meaning that a response will not be returned until
all
the objects are fully available. The response payload provides the count of
the
resulting set. A new resultant set (view) which satisfies the input circular
NAI
restriction specification is also created if a view_name
is
passed in
as part of the request.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_radius_request(request, callback) → {Promise}
Calculates which objects from a table lie within a circle with the
given radius and center point (i.e. circular NAI). The operation is
synchronous,
meaning that a response will not be returned until all the objects are fully
available. The response payload provides the count of the resulting set. A
new
resultant set (view) which satisfies the input circular NAI restriction
specification is also created if a
view_name
is passed in as
part of
the request.
For track data, all track points that lie within the circle plus one point
on
either side of the circle (if the track goes beyond the circle) will be
included
in the result.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_range(table_name, view_name, column_name, lower_bound, upper_bound, options, callback) → {Promise}
Calculates which objects from a table have a column that is within
the given bounds. An object from the table identified by
table_name
is
added to the view
view_name
if its column is within
[
lower_bound
,
upper_bound
] (inclusive). The
operation is
synchronous. The response provides a count of the number of objects which
passed
the bound filter. Although this functionality can also be accomplished with
the
standard filter function, it is more efficient.
For track objects, the count reflects how many points fall within the given
bounds (which may not include all the track points of any given track).
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter by range
operation will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. Must
be an existing table. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
column_name |
String
|
Name of a column on which the operation would
be applied. |
lower_bound |
Number
|
Value of the lower bound (inclusive). |
upper_bound |
Number
|
Value of the upper bound (inclusive). |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema is non-existent,
it will be automatically created.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
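An inclusive-range sketch (hypothetical names); records with 10 <= fare_amount <= 20 are kept:

db.filter_by_range(
    "ki_home.trips", "ki_home.trips_midfare",
    "fare_amount", 10, 20,                    // column, lower and upper bounds
    {}
).then(response => console.log("in range:", response.count));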
filter_by_range_request(request, callback) → {Promise}
Calculates which objects from a table have a column that is within
the given bounds. An object from the table identified by
table_name
is
added to the view
view_name
if its column is within
[
lower_bound
,
upper_bound
] (inclusive). The
operation is
synchronous. The response provides a count of the number of objects which
passed
the bound filter. Although this functionality can also be accomplished with
the
standard filter function, it is more efficient.
For track objects, the count reflects how many points fall within the given
bounds (which may not include all the track points of any given track).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_series(table_name, view_name, track_id, target_track_ids, options, callback) → {Promise}
Filters objects matching all points of the given track (works only
on track type data). It allows users to specify a particular track to find
all
other points in the table that fall within specified ranges (spatial and
temporal) of all points of the given track. Additionally, the user can
specify
another track to see if the two intersect (or go close to each other within
the
specified ranges). The user also has the flexibility of using different
metrics
for the spatial distance calculation: Euclidean (flat geometry) or Great
Circle
(spherical geometry to approximate the Earth's surface distances). The
filtered
points are stored in a newly created result set. The return value of the
function is the number of points in the resultant set (view).
This operation is synchronous, meaning that a response will not be returned
until all the objects are fully available.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter by track
operation will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. Must be
a currently existing table with a track present. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
track_id |
String
|
The ID of the track which will act as the
filtering points. Must be an existing track within
the given table. |
target_track_ids |
Array.<String>
|
Up to one track ID to intersect with the
"filter" track. If any provided, it must
be an valid track ID within the given
set. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema is non-existent,
it will be automatically created.
- 'spatial_radius': A positive number
passed as a string representing the radius of the
search area centered around each track point's
geospatial coordinates. The value is interpreted in
meters. Required parameter.
- 'time_radius': A positive number
passed as a string representing the maximum
allowable time difference between the timestamps of
a filtered object and the given track's points. The
value is interpreted in seconds. Required
parameter.
- 'spatial_distance_metric': A string
representing the coordinate system to use for the
spatial search criteria. Acceptable values are
'euclidean' and 'great_circle'. Optional parameter;
default is 'euclidean'.
Supported values:
- 'euclidean'
- 'great_circle'
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
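A track-proximity sketch (hypothetical names); spatial_radius and time_radius are required and, like all option values, passed as strings:

db.filter_by_series(
    "ki_home.tracks", "ki_home.tracks_near_42",
    "TRACK_42",                               // ID of the filtering track
    [],                                       // no target track to intersect
    {
        "spatial_radius": "500",              // meters
        "time_radius": "60",                  // seconds
        "spatial_distance_metric": "great_circle"
    }
).then(response => console.log("points in range:", response.count));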
filter_by_series_request(request, callback) → {Promise}
Filters objects matching all points of the given track (works only
on track type data). It allows users to specify a particular track to find
all
other points in the table that fall within specified ranges (spatial and
temporal) of all points of the given track. Additionally, the user can
specify
another track to see if the two intersect (or go close to each other within
the
specified ranges). The user also has the flexibility of using different
metrics
for the spatial distance calculation: Euclidean (flat geometry) or Great
Circle
(spherical geometry to approximate the Earth's surface distances). The
filtered
points are stored in a newly created result set. The return value of the
function is the number of points in the resultant set (view).
This operation is synchronous, meaning that a response will not be returned
until all the objects are fully available.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_string(table_name, view_name, expression, mode, column_names, options, callback) → {Promise}
Calculates which objects from a table or view match a string
expression for the given string columns. Setting
case_sensitive
can modify case sensitivity in matching
for all modes except
search
. For
search
mode details and limitations, see
Full Text
Search.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table on which the filter operation
will be performed, in [schema_name.]table_name
format, using standard name resolution rules. Must
be an existing table or view. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
expression |
String
|
The expression with which to filter the table. |
mode |
String
|
The string filtering mode to apply. See below for
details.
Supported values:
- 'search': Full text search query with
wildcards and boolean operators. Note that for this
mode, no column can be specified in
column_names ; all string columns of the
table that have text search enabled will be searched.
- 'equals': Exact whole-string match
(accelerated).
- 'contains': Partial substring match (not
accelerated). If the column is a string type
(non-charN) and the number of records is too large, it
will return 0.
- 'starts_with': Strings that start with
the given expression (not accelerated). If the column
is a string type (non-charN) and the number of records
is too large, it will return 0.
- 'regex': Full regular expression search
(not accelerated). If the column is a string type
(non-charN) and the number of records is too large, it
will return 0.
|
column_names |
Array.<String>
|
List of columns on which to apply the
filter. Ignored for search
mode. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema is non-existent,
it will be automatically created.
- 'case_sensitive': If
false then string filtering will
ignore case. Does not apply to search
mode.
Supported values:
- 'true'
- 'false'
The default value is 'true'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
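A substring-match sketch (hypothetical names); with mode 'contains' only the listed columns are searched, whereas 'search' mode would ignore column_names:

db.filter_by_string(
    "ki_home.vendors", "ki_home.vendors_brooklyn",
    "Brooklyn",                               // expression to match
    "contains",                               // string filtering mode
    ["address"],                              // columns to filter on
    { "case_sensitive": "false" }
).then(response => console.log("matches:", response.count));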
filter_by_string_request(request, callback) → {Promise}
Calculates which objects from a table or view match a string
expression for the given string columns. Setting
case_sensitive
can modify case sensitivity in matching
for all modes except
search
. For
search
mode details and limitations, see
Full Text
Search.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_table(table_name, view_name, column_name, source_table_name, source_table_column_name, options, callback) → {Promise}
Filters objects in one table based on objects in another table. The
user must specify matching column types from the two tables (i.e. the target
table from which objects will be filtered and the source table based on
which
the filter will be created); the column names need not be the same. If a
view_name
is specified, then the filtered objects will then be
put in a
newly created view. The operation is synchronous, meaning that a response
will
not be returned until all objects are fully available in the result view.
The
return value contains the count (i.e. the size) of the resulting view.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table whose data will be filtered,
in [schema_name.]table_name format, using
standard name resolution rules. Must
be an existing table. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
column_name |
String
|
Name of the column by whose value the data will
be filtered from the table designated by
table_name . |
source_table_name |
String
|
Name of the table whose data will be
compared against in the table called
table_name , in
[schema_name.]table_name format, using
standard name resolution rules.
Must be an existing table. |
source_table_column_name |
String
|
Name of the column in the
source_table_name
whose values will be used as the
filter for table
table_name . Must be a
geospatial geometry column if in
'spatial' mode; otherwise, Must
match the type of the
column_name . |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema is non-existent,
it will be automatically created.
- 'filter_mode': String indicating the
filter mode, either
in_table or
not_in_table .
Supported values:
- 'in_table'
- 'not_in_table'
The default value is 'in_table'.
- 'mode': Mode - should be either
spatial or normal .
Supported values:
- 'spatial'
- 'normal'
The default value is 'normal'.
- 'buffer': Buffer size, in meters. Only
relevant for
spatial mode. The
default value is '0'.
- 'buffer_method': Method used to buffer
polygons. Only relevant for
spatial
mode.
Supported values:
- 'normal'
- 'geos': Use geos 1 edge per corner
algorithm
The default value is 'normal'.
- 'max_partition_size': Maximum number
of points in a partition. Only relevant for
spatial mode. The default value is
'0'.
- 'max_partition_score': Maximum number
of points * edges in a partition. Only relevant for
spatial mode. The default value is
'8000000'.
- 'x_column_name': Name of column
containing x value of point being filtered in
spatial mode. The default value is
'x'.
- 'y_column_name': Name of column
containing y value of point being filtered in
spatial mode. The default value is
'y'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
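A sketch filtering one table by another's values (hypothetical names); orders whose customer_id appears in the id column of customers are kept:

db.filter_by_table(
    "ki_home.orders", "ki_home.orders_known_customers",
    "customer_id",                            // column in the filtered table
    "ki_home.customers",                      // source table to compare against
    "id",                                     // column in the source table
    { "filter_mode": "in_table", "mode": "normal" }
).then(response => console.log("kept:", response.count));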
filter_by_table_request(request, callback) → {Promise}
Filters objects in one table based on objects in another table. The
user must specify matching column types from the two tables (i.e. the target
table from which objects will be filtered and the source table based on
which
the filter will be created); the column names need not be the same. If a
view_name
is specified, then the filtered objects will then be
put in a
newly created view. The operation is synchronous, meaning that a response
will
not be returned until all objects are fully available in the result view.
The
return value contains the count (i.e. the size) of the resulting view.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_by_value(table_name, view_name, is_string, value, value_str, column_name, options, callback) → {Promise}
Calculates which objects from a table have a particular value for a
particular column. The input parameters provide a way to specify either a
String
or a Double valued column and a desired value for the column on which the
filter
is performed. The operation is synchronous, meaning that a response will not
be
returned until all the objects are fully available. The response payload
provides the count of the resulting set. A new result view which satisfies
the
input filter restriction specification is also created with a view name
passed
in as part of the input payload. Although this functionality can also be
accomplished with the standard filter function, it is more efficient.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of an existing table on which to perform
the calculation, in [schema_name.]table_name
format, using standard name resolution rules. |
view_name |
String
|
If provided, then this will be the name of the
view containing the results, in
[schema_name.]view_name format, using standard name resolution rules and
meeting table naming criteria. Must
not be an already existing table or view. |
is_string |
Boolean
|
Indicates whether the value being searched for
is string or numeric. |
value |
Number
|
The value to search for. |
value_str |
String
|
The string value to search for. |
column_name |
String
|
Name of a column on which the filter by value
would be applied. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of view_name . This is always
allowed even if the caller does not have permission
to create tables. The generated name is returned in
qualified_view_name .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the view as part
of
view_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created view. If the schema is non-existent,
it will be automatically created.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
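A single-value match sketch (hypothetical names); for a string match, is_string is true, value_str carries the value, and the numeric value parameter is ignored (0 as a placeholder):

db.filter_by_value(
    "ki_home.trips", "ki_home.trips_jfk",
    true,                                     // searching for a string value
    0,                                        // numeric value (unused here)
    "JFK",                                    // string value to search for
    "dropoff_zone",                           // column to filter by value
    {}
).then(response => console.log("matches:", response.count));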
filter_by_value_request(request, callback) → {Promise}
Calculates which objects from a table have a particular value for a
particular column. The input parameters provide a way to specify either a
String
or a Double valued column and a desired value for the column on which the
filter
is performed. The operation is synchronous, meaning that a response will not
be
returned until all the objects are fully available. The response payload
provides the count of the resulting set. A new result view which satisfies
the
input filter restriction specification is also created with a view name
passed
in as part of the input payload. Although this functionality can also be
accomplished with the standard filter function, it is more efficient.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
filter_request(request, callback) → {Promise}
Filters data based on the specified expression. The results are
stored in a
result
set with the
given
view_name
.
For details see Expressions.
The response message contains the number of points for which the expression
evaluated to be true, which is equivalent to the size of the result view.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
get_geo_json(table_name, offset, limit, options, callback) → {Promise}
Retrieves records from a given table as GeoJSON, optionally filtered by an expression
and/or sorted by a column. This operation can be performed on tables, views,
or on homogeneous collections (collections containing tables of all the same
type). Records can be returned encoded as binary, json or geojson.
This operation supports paging through the data via the offset
and limit
parameters. Note that when paging through a table, if
the table (or the underlying table in case of a view) is updated (records
are inserted, deleted or modified) the records retrieved may differ between
calls based on the updates applied.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table from which the records will be
fetched. Must be a table, view or homogeneous
collection. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned. Or END_OF_SET (-9999) to
indicate that the max number of results should be
returned. |
options |
Object
|
- 'expression': Optional filter
expression to apply to the table.
- 'fast_index_lookup': Indicates if
indexes should be used to perform the lookup for a
given expression if possible. Only applicable if
there is no sorting, the expression contains only
equivalence comparisons based on existing tables
indexes and the range of requested values is from
[0 to END_OF_SET].
Supported values:
- 'true'
- 'false'
The default value is 'true'.
- 'sort_by': Optional column that the
data should be sorted by. Empty by default (i.e. no
sorting is applied).
- 'sort_order': String indicating how
the returned values should be sorted - ascending or
descending. If sort_order is provided, sort_by has
to be provided.
Supported values:
- 'ascending'
- 'descending'
The default value is 'ascending'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the GeoJSON
object, if no callback function is provided.
-
Type
-
Promise
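A GeoJSON retrieval sketch (hypothetical names); the promise resolves with the GeoJSON object itself, assumed here to be a FeatureCollection:

db.get_geo_json(
    "ki_home.parcels",
    0,                                        // offset: skip no records
    100,                                      // limit: at most 100 records
    { "expression": "zone = 'downtown'", "sort_by": "parcel_id" }
).then(geojson => console.log(geojson.type, geojson.features.length))
 .catch(error => console.error(error));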
Returns an object containing all the custom headers currently used
by the API. A deep copy is returned so that the user does not
accidentally change the actual headers. Note that the API may use other
headers as appropriate; the ones returned here are the custom ones set
up by the user.
- Source:
Returns:
The object containing all the custom headers the
user has set up so far.
-
Type
-
Object
get_job(job_id, options, callback) → {Promise}
Gets the status and result of an asynchronously running job. See
GPUdb#create_job
for starting an asynchronous job. Some
fields of the response are filled only after the submitted job has finished
execution.
Parameters:
Name |
Type |
Description |
job_id |
Number
|
A unique identifier for the job whose status and
result are to be fetched. |
options |
Object
|
Optional parameters.
- 'job_tag': Job tag returned in call to
create the job
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
get_job_request(request, callback) → {Promise}
Gets the status and result of an asynchronously running job. See
GPUdb#create_job
for starting an asynchronous job. Some
fields of the response are filled only after the submitted job has finished
execution.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
get_records(table_name, offset, limit, options, callback) → {Promise}
Retrieves records from a given table, optionally filtered by an
expression and/or sorted by a column. This operation can be performed on
tables
and views. Records can be returned encoded as binary, json, or geojson.
This operation supports paging through the data via the offset
and
limit
parameters. Note that when paging through a table, if
the table
(or the underlying table in case of a view) is updated (records are
inserted,
deleted or modified) the records retrieved may differ between calls based on
the
updates applied.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table or view from which the records
will be fetched, in [schema_name.]table_name
format, using standard name resolution rules. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned, or
END_OF_SET (-9999) to indicate that the maximum
number of results allowed by the server should be
returned. The number of records returned will never
exceed the server's own limit, defined by the
max_get_records_size parameter in
the server configuration.
Use has_more_records to see if more
records exist in the result to be fetched, and
offset & limit to request
subsequent pages of results. |
options |
Object
|
- 'expression': Optional filter
expression to apply to the table.
- 'fast_index_lookup': Indicates if
indexes should be used to perform the lookup for a
given expression if possible. Only applicable if
there is no sorting, the expression contains only
equivalence comparisons based on existing tables
indexes and the range of requested values is from
[0 to END_OF_SET].
Supported values:
- 'true'
- 'false'
The default value is 'true'.
- 'sort_by': Optional column that the
data should be sorted by. Empty by default (i.e. no
sorting is applied).
- 'sort_order': String indicating how
the returned values should be sorted - ascending or
descending. If sort_order is provided, sort_by has
to be provided.
Supported values:
- 'ascending'
- 'descending'
The default value is 'ascending'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
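A callback-style paging sketch (hypothetical names); the callback convention of (error, response) and the decoded-records field name data are assumptions, while has_more_records is documented above:

db.get_records(
    "ki_home.nyctaxi",
    0, 100,                                   // offset and limit for page one
    { "sort_by": "pickup_datetime", "sort_order": "ascending" },
    (error, response) => {
        if (error) { console.error(error); return; }
        console.log("records:", response.data.length,  // field name assumed
                    "more:", response.has_more_records);
    }
);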
get_records_by_column(table_name, column_names, offset, limit, options, callback) → {Promise}
For a given table, retrieves the values from the requested
column(s). Maps of column name to the array of values as well as the column
data
type are returned. This endpoint supports pagination with the
offset
and
limit
parameters.
Window functions,
which can perform
operations like moving averages, are available through this endpoint as well
as
GPUdb#create_projection
.
When using pagination, if the table (or the underlying table in the case of
a
view) is modified (records are inserted, updated, or deleted) during a call
to
the endpoint, the records or values retrieved may differ between calls based
on
the type of the update, e.g., the contiguity across pages cannot be relied
upon.
If table_name
is empty, selection is performed against a
single-row
virtual table. This can be useful in executing temporal
(NOW()), identity
(USER()), or
constant-based functions
(GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see:
dynamic
schemas documentation.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table or view on which this
operation will be performed, in
[schema_name.]table_name format, using standard
name resolution rules. An
empty table name retrieves one record from a
single-row virtual table, where columns
specified should be constants or constant
expressions. |
column_names |
Array.<String>
|
The list of column values to retrieve. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned, or
END_OF_SET (-9999) to indicate that the maximum
number of results allowed by the server should be
returned. The number of records returned will never
exceed the server's own limit, defined by the
max_get_records_size parameter in
the server configuration.
Use has_more_records to see if more
records exist in the result to be fetched, and
offset & limit to request
subsequent pages of results. |
options |
Object
|
- 'expression': Optional filter
expression to apply to the table.
- 'sort_by': Optional column that the
data should be sorted by. Used in conjunction with
sort_order . The order_by
option can be used in lieu of sort_by
/ sort_order . The default value is
''.
- 'sort_order': String indicating how
the returned values should be sorted -
ascending or descending .
If sort_order is provided,
sort_by has to be provided.
Supported values:
- 'ascending'
- 'descending'
The default value is 'ascending'.
- 'order_by': Comma-separated list of
the columns to be sorted by as well as the sort
direction, e.g., 'timestamp asc, x desc'. The
default value is ''.
- 'convert_wkts_to_wkbs': If
true , then WKT string columns will be
returned as WKB bytes.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
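A usage sketch; the table and column names are hypothetical, and the second call exercises the documented empty-table form to evaluate a constant expression:
const db = new GPUdb("http://localhost:9191"); // hypothetical URL
(async () => {
    // Fetch two columns, sorted, using the documented order_by option
    const page = await db.get_records_by_column(
        "example.quotes", ["symbol", "price"], 0, 1000,
        { "order_by": "price desc" });
    // Empty table name: evaluate a constant expression against the
    // single-row virtual table
    const dist = await db.get_records_by_column(
        "", ["GEODIST(-77.11, 38.88, -71.06, 42.36) AS dist_m"], 0, 1, {});
})();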
get_records_by_column_request(request, callback) → {Promise}
For a given table, retrieves the values from the requested
column(s). Maps of column name to the array of values as well as the column
data
type are returned. This endpoint supports pagination with the
offset
and
limit
parameters.
Window functions,
which can perform
operations like moving averages, are available through this endpoint as well
as
GPUdb#create_projection
.
When using pagination, if the table (or the underlying table in the case of
a
view) is modified (records are inserted, updated, or deleted) during a call
to
the endpoint, the records or values retrieved may differ between calls based
on
the type of the update, e.g., the contiguity across pages cannot be relied
upon.
If table_name
is empty, selection is performed against a
single-row
virtual table. This can be useful in executing temporal
(NOW()), identity
(USER()), or
constant-based functions
(GEODIST(-77.11, 38.88, -71.06, 42.36)).
The response is returned as a dynamic schema. For details see:
dynamic
schemas documentation.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
get_records_by_series(table_name, world_table_name, offset, limit, options, callback) → {Promise}
Retrieves the complete series/track records from the given
world_table_name
based on the partial track information
contained in
the
table_name
.
This operation supports paging through the data via the offset
and
limit
parameters.
In contrast to GPUdb#get_records
this returns records grouped
by
series/track. So if offset
is 0 and limit
is 5
this operation
would return the first 5 series/tracks in table_name
. Each
series/track
will be returned sorted by its TIMESTAMP column.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table or view for which
series/tracks will be fetched, in
[schema_name.]table_name format, using standard
name resolution rules. |
world_table_name |
String
|
Name of the table containing the complete
series/track information to be returned
for the tracks present in the
table_name , in
[schema_name.]table_name format, using
standard name resolution rules.
Typically this is used when retrieving
series/tracks from a view (which contains
partial series/tracks) but the user wants
to retrieve the entire original
series/tracks. Can be blank. |
offset |
Number
|
A positive integer indicating the number of initial
series/tracks to skip (useful for paging through the
results). |
limit |
Number
|
A positive integer indicating the maximum number of
series/tracks to be returned, or END_OF_SET (-9999)
to indicate that the maximum number of results should
be returned. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
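A usage sketch fetching the first five complete tracks behind the partial tracks of a view; both table names are hypothetical:
const db = new GPUdb("http://localhost:9191"); // hypothetical URL
(async () => {
    const response = await db.get_records_by_series(
        "example.recent_tracks",  // view holding partial tracks
        "example.all_tracks",     // table holding the complete tracks
        0, 5, {});
})();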
get_records_by_series_request(request, callback) → {Promise}
Retrieves the complete series/track records from the given
world_table_name
based on the partial track information
contained in
the
table_name
.
This operation supports paging through the data via the offset
and
limit
parameters.
In contrast to GPUdb#get_records
this returns records grouped
by
series/track. So if offset
is 0 and limit
is 5
this operation
would return the first 5 series/tracks in table_name
. Each
series/track
will be returned sorted by its TIMESTAMP column.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
get_records_from_collection(table_name, offset, limit, options, callback) → {Promise}
Retrieves records from a collection. The operation can optionally
return the record IDs which can be used in certain queries such as
GPUdb#delete_records
.
This operation supports paging through the data via the offset
and
limit
parameters.
Note that when using the Java API, it is not possible to retrieve records
from
join views using this operation.
(DEPRECATED)
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the collection or table from which
records are to be retrieved, in
[schema_name.]table_name format, using standard
name resolution rules. Must
be an existing collection or table. |
offset |
Number
|
A positive integer indicating the number of initial
results to skip (this can be useful for paging
through the results). |
limit |
Number
|
A positive integer indicating the maximum number of
results to be returned, or
END_OF_SET (-9999) to indicate that the maximum
number of results allowed by the server should be
returned. The number of records returned will never
exceed the server's own limit, defined by the
max_get_records_size parameter in
the server configuration.
Use offset & limit to
request subsequent pages of results. |
options |
Object
|
- 'return_record_ids': If
true then return the internal record
ID along with each returned record.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'expression': Optional filter
expression to apply to the table. The default
value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
get_records_from_collection_request(request, callback) → {Promise}
Retrieves records from a collection. The operation can optionally
return the record IDs which can be used in certain queries such as
GPUdb#delete_records
.
This operation supports paging through the data via the offset
and
limit
parameters.
Note that when using the Java API, it is not possible to retrieve records
from
join views using this operation.
(DEPRECATED)
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
get_records_json(table_name, column_names, offset, limit, expression, orderby_columns, having_clause, callback)
Retrieves records from a Kinetica table in the form of a stringified JSON
array. The only mandatory parameter is table_name; the rest are optional,
with suitable defaults wherever applicable.
Parameters:
Name |
Type |
Description |
table_name |
string
|
Name of the table from which records are to be retrieved |
column_names |
array
|
The columns to retrieve |
offset |
int
|
The number of initial results to skip |
limit |
int
|
The maximum number of records to return |
expression |
string
|
Optional filter expression to apply |
orderby_columns |
array
|
The columns to order the results by |
having_clause |
string
|
Optional HAVING clause to apply |
callback |
object
|
Callback that handles the response |
- Source:
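A callback-style sketch; the table, columns, and filter are hypothetical, and the (error, data) callback signature is an assumption rather than something this reference specifies:
const db = new GPUdb("http://localhost:9191"); // hypothetical URL
db.get_records_json(
    "example.quotes",            // table_name (the only mandatory parameter)
    ["symbol", "price"],         // column_names
    0, 100,                      // offset, limit
    "price > 10",                // expression
    ["price desc"],              // orderby_columns
    null,                        // having_clause
    function (error, data) {     // assumed (error, data) shape
        if (error) { console.error(error); return; }
        console.log(data);       // stringified JSON array of records
    });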
get_records_request(request, callback) → {Promise}
Retrieves records from a given table, optionally filtered by an
expression and/or sorted by a column. This operation can be performed on
tables
and views. Records can be returned encoded as binary, json, or geojson.
This operation supports paging through the data via the offset
and
limit
parameters. Note that when paging through a table, if
the table
(or the underlying table in case of a view) is updated (records are
inserted,
deleted or modified) the records retrieved may differ between calls based on
the
updates applied.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission(principal, object, object_type, permission, options, callback) → {Promise}
Grants a user or role the specified permission on the specified object.
Parameters:
Name |
Type |
Description |
principal |
String
|
Name of the user or role for which the permission
is being granted. Must be an existing user or
role. |
object |
String
|
Name of the object on which the permission is being granted. It
is recommended to use a fully-qualified name when
possible. |
object_type |
String
|
The type of the object on which the permission is being granted
Supported values:
- 'context': Context
- 'credential': Credential
- 'datasink': Data Sink
- 'datasource': Data Source
- 'directory': KiFS File Directory
- 'graph': A Graph object
- 'proc': UDF Procedure
- 'schema': Schema
- 'sql_proc': SQL Procedure
- 'system': System-level access
- 'table': Database Table
- 'table_monitor': Table monitor
|
permission |
String
|
Permission being granted.
Supported values:
- 'admin': Full read/write and
administrative access on the object.
- 'connect': Connect access on the
given data source or data sink.
- 'delete': Delete rows from tables.
- 'execute': Ability to Execute the
Procedure object.
- 'insert': Insert access to tables.
- 'read': Ability to read, list and
use the object.
- 'update': Update access to the
table.
- 'user_admin': Access to administer
users and roles that do not have system_admin
permission.
- 'write': Access to write, change
and delete objects.
|
options |
Object
|
Optional parameters.
- 'columns': Apply table security to
these columns, comma-separated. The default value
is ''.
- 'filter_expression': Optional filter
expression to apply to this grant. Only rows that
match the filter will be affected. The default
value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
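A usage sketch granting a role read access on a table, narrowed with the documented columns and filter_expression options; the role, table, and filter are hypothetical:
const db = new GPUdb("http://localhost:9191"); // hypothetical URL
(async () => {
    await db.grant_permission("analyst_role", "example.orders", "table", "read", {
        "columns": "id,total",                  // column-level security
        "filter_expression": "region = 'east'"  // row-level security
    });
})();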
grant_permission_credential(name, permission, credential_name, options, callback) → {Promise}
Grants a credential-level permission to a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role to which the permission will
be granted. Must be an existing user or role. |
permission |
String
|
Permission to grant to the user or role.
Supported values:
- 'credential_admin': Full read/write
and administrative access on the credential.
- 'credential_read': Ability to read
and use the credential.
|
credential_name |
String
|
Name of the credential on which the
permission will be granted. Must be an
existing credential, or an empty string to
grant access on all credentials. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission_credential_request(request, callback) → {Promise}
Grants a credential-level permission to a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission_datasource(name, permission, datasource_name, options, callback) → {Promise}
Grants a data source permission to a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role to which the permission will
be granted. Must be an existing user or role. |
permission |
String
|
Permission to grant to the user or role
Supported values:
- 'admin': Admin access on the given
data source
- 'connect': Connect access on the
given data source
|
datasource_name |
String
|
Name of the data source on which the
permission will be granted. Must be an
existing data source, or an empty string to
grant permission on all data sources. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission_datasource_request(request, callback) → {Promise}
Grants a data source permission to a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission_directory(name, permission, directory_name, options, callback) → {Promise}
Grants a
KiFS
directory-level permission to a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role to which the permission will
be granted. Must be an existing user or role. |
permission |
String
|
Permission to grant to the user or role.
Supported values:
- 'directory_read': For files in the
directory, access to list files, download files,
or use files in server side functions
- 'directory_write': Access to upload
files to, or delete files from, the directory. A
user or role with write access automatically has
read access
|
directory_name |
String
|
Name of the KiFS directory to which the
permission grants access. An empty directory
name grants access to all KiFS directories |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission_directory_request(request, callback) → {Promise}
Grants a
KiFS
directory-level permission to a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission_proc(name, permission, proc_name, options, callback) → {Promise}
Grants a proc-level permission to a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role to which the permission will
be granted. Must be an existing user or role. |
permission |
String
|
Permission to grant to the user or role.
Supported values:
- 'proc_admin': Admin access to the
proc.
- 'proc_execute': Execute access to
the proc.
|
proc_name |
String
|
Name of the proc to which the permission grants
access. Must be an existing proc, or an empty
string to grant access to all procs. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission_proc_request(request, callback) → {Promise}
Grants a proc-level permission to a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission_request(request, callback) → {Promise}
Grants a user or role the specified permission on the specified object.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission_system(name, permission, options, callback) → {Promise}
Grants a system-level permission to a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role to which the permission will
be granted. Must be an existing user or role. |
permission |
String
|
Permission to grant to the user or role.
Supported values:
- 'system_admin': Full access to all
data and system functions.
- 'system_user_admin': Access to
administer users and roles that do not have
system_admin permission.
- 'system_write': Read and write
access to all tables.
- 'system_read': Read-only access to
all tables.
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission_system_request(request, callback) → {Promise}
Grants a system-level permission to a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_permission_table(name, permission, table_name, filter_expression, options, callback) → {Promise}
Grants a table-level permission to a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role to which the permission will
be granted. Must be an existing user or role. |
permission |
String
|
Permission to grant to the user or role.
Supported values:
- 'table_admin': Full read/write and
administrative access to the table.
- 'table_insert': Insert access to
the table.
- 'table_update': Update access to
the table.
- 'table_delete': Delete access to
the table.
- 'table_read': Read access to the
table.
|
table_name |
String
|
Name of the table to which the permission grants
access, in [schema_name.]table_name format,
using standard name resolution rules. Must
be an existing table, view, or schema. If a
schema, the permission also applies to tables
and views in the schema. |
filter_expression |
String
|
Optional filter expression to apply to
this grant. Only rows that match the
filter will be affected. |
options |
Object
|
Optional parameters.
- 'columns': Apply security to these
columns, comma-separated. The default value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
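A usage sketch of the table-specific variant with a row filter and a column list; the user, table, and filter are hypothetical:
const db = new GPUdb("http://localhost:9191"); // hypothetical URL
(async () => {
    await db.grant_permission_table(
        "jsmith", "table_read", "example.orders",
        "region = 'east'",              // filter_expression (row-level)
        { "columns": "id,total" });     // column-level security
})();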
grant_permission_table_request(request, callback) → {Promise}
Grants a table-level permission to a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
grant_role(role, member, options, callback) → {Promise}
Grants membership in a role to a user or role.
Parameters:
Name |
Type |
Description |
role |
String
|
Name of the role in which membership will be granted.
Must be an existing role. |
member |
String
|
Name of the user or role that will be granted
membership in role . Must be an existing
user or role. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
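A usage sketch; the role and user names are hypothetical:
const db = new GPUdb("http://localhost:9191"); // hypothetical URL
(async () => {
    await db.grant_role("analyst_role", "jsmith", {});
})();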
grant_role_request(request, callback) → {Promise}
Grants membership in a role to a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
has_permission(principal, object, object_type, permission, options, callback) → {Promise}
Checks if the specified user has the specified permission on the specified
object.
Parameters:
Name |
Type |
Description |
principal |
String
|
Name of the user for which the permission is
being checked. Must be an existing user. If
blank, will use the current user. |
object |
String
|
Name of object to check for the requested
permission. It is recommended to use a
fully-qualified name when possible. |
object_type |
String
|
The type of object being checked
Supported values:
- 'context': Context
- 'credential': Credential
- 'datasink': Data Sink
- 'datasource': Data Source
- 'directory': KiFS File Directory
- 'graph': A Graph object
- 'proc': UDF Procedure
- 'schema': Schema
- 'sql_proc': SQL Procedure
- 'system': System-level access
- 'table': Database Table
- 'table_monitor': Table monitor
|
permission |
String
|
Permission to check for.
Supported values:
- 'admin': Full read/write and
administrative access on the object.
- 'connect': Connect access on the
given data source or data sink.
- 'delete': Delete rows from tables.
- 'execute': Ability to Execute the
Procedure object.
- 'insert': Insert access to tables.
- 'read': Ability to read, list and
use the object.
- 'update': Update access to the
table.
- 'user_admin': Access to administer
users and roles that do not have system_admin
permission.
- 'write': Access to write, change
and delete objects.
|
options |
Object
|
Optional parameters.
- 'no_error_if_not_exists': If
false will return an error if the
provided object does not exist or is
blank. If true then it will return
false for has_permission .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
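A usage sketch checking table read access; the user and table names are hypothetical, and the has_permission response field is inferred from the no_error_if_not_exists description above:
const db = new GPUdb("http://localhost:9191"); // hypothetical URL
(async () => {
    const response = await db.has_permission(
        "jsmith", "example.orders", "table", "read",
        { "no_error_if_not_exists": "true" }); // report false instead of erroring
    console.log(response.has_permission);      // field name inferred from the docs above
})();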
has_permission_request(request, callback) → {Promise}
Checks if the specified user has the specified permission on the specified
object.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
has_proc(proc_name, options, callback) → {Promise}
Checks the existence of a proc with the given name.
Parameters:
Name |
Type |
Description |
proc_name |
String
|
Name of the proc to check for existence. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
has_proc_request(request, callback) → {Promise}
Checks the existence of a proc with the given name.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
has_role(principal, role, options, callback) → {Promise}
Checks if the specified user has the specified role.
Parameters:
Name |
Type |
Description |
principal |
String
|
Name of the user for which role membership is
being checked. Must be an existing user. If
blank, will use the current user. |
role |
String
|
Name of role to check for membership. |
options |
Object
|
Optional parameters.
- 'no_error_if_not_exists': If
false will return an error if the
provided role does not exist or is
blank. If true then it will return
false for has_role .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'only_direct': If
false
will search recursively if the
principal is a member of
role . If true then
principal must directly be a member of
role .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
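A usage sketch checking (possibly indirect) role membership; the names are hypothetical, and the has_role response field is inferred from the no_error_if_not_exists description above:
const db = new GPUdb("http://localhost:9191"); // hypothetical URL
(async () => {
    const response = await db.has_role("jsmith", "analyst_role", {
        "only_direct": "false"       // search role membership recursively
    });
    console.log(response.has_role);  // field name inferred from the docs above
})();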
has_role_request(request, callback) → {Promise}
Checks if the specified user has the specified role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
has_schema(schema_name, options, callback) → {Promise}
Checks for the existence of a schema with the given name.
Parameters:
Name |
Type |
Description |
schema_name |
String
|
Name of the schema to check for existence, in
root, using standard name resolution rules. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
has_schema_request(request, callback) → {Promise}
Checks for the existence of a schema with the given name.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
has_table(table_name, options, callback) → {Promise}
Checks for the existence of a table with the given name.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to check for existence, in
[schema_name.]table_name format, using standard
name resolution rules. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
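A usage sketch; the table name is hypothetical, and the table_exists response field is an assumption about the response shape:
const db = new GPUdb("http://localhost:9191"); // hypothetical URL
(async () => {
    const response = await db.has_table("example.orders", {});
    console.log(response.table_exists); // assumed response field name
})();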
has_table_request(request, callback) → {Promise}
Checks for the existence of a table with the given name.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
has_type(type_id, options, callback) → {Promise}
Checks for the existence of a type.
Parameters:
Name |
Type |
Description |
type_id |
String
|
ID of the type returned in response to a
GPUdb#create_type request. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
has_type_request(request, callback) → {Promise}
Checks for the existence of a type.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
insert_records(table_name, data, options, callback) → {Promise}
Adds multiple records to the specified table. The operation is
synchronous, meaning that a response will not be returned until all the
records
are fully inserted and available. The response payload provides the counts
of
the number of records actually inserted and/or updated, and can provide the
unique identifier of each added record.
The options
parameter can be used to customize this function's
behavior.
The update_on_existing_pk
option specifies the record
collision policy for inserting into a table with a
primary
key, but is ignored if
no primary key exists.
The return_record_ids
option indicates that the
database should return the unique identifiers of inserted records.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of table to which the records are to be
added, in [schema_name.]table_name format, using
standard name resolution rules. Must
be an existing table. |
data |
Array.<Object>
|
An array of JSON encoded data for the records to be
added. All records must be of the same type as that
of the table. Empty array if
list_encoding is binary . |
options |
Object
|
Optional parameters.
- 'update_on_existing_pk': Specifies the
record collision policy for inserting into a table
with a primary key. If set to
true , any existing table record with
primary
key values that match those of a record being
inserted will be replaced by that new record (the
new
data will be "upserted"). If set to
false ,
any existing table record with primary key values
that match those of a record being inserted will
remain unchanged, while the new record will be
rejected and the error handled as determined by
ignore_existing_pk ,
allow_partial_batch , &
return_individual_errors . If the
specified table does not have a primary
key, then this option has no effect.
Supported values:
- 'true': Upsert new records when
primary keys match existing records
- 'false': Reject new records when
primary keys match existing records
The default value is 'false'.
- 'ignore_existing_pk': Specifies the
record collision error-suppression policy for
inserting into a table with a primary key, only used when
not in upsert mode (upsert mode is disabled when
update_on_existing_pk is
false ). If set to
true , any record being inserted that
is rejected
for having primary key values that match those of
an existing table record will be ignored with no
error generated. If false , the
rejection of any
record for having primary key values matching an
existing record will result in an error being
reported, as determined by
allow_partial_batch &
return_individual_errors . If the
specified table does not
have a primary key or if upsert mode is in effect
(update_on_existing_pk is
true ), then this option has no effect.
Supported values:
- 'true': Ignore new records whose
primary key values collide with those of existing
records
- 'false': Treat as errors any new
records whose primary key values collide with those
of existing records
The default value is 'false'.
- 'return_record_ids': If
true then return the internal record
ID along with each inserted record.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'truncate_strings': If set to
true , any strings which are too long
for their target charN string columns will be
truncated to fit.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'return_individual_errors': If set to
true , success will always be returned,
and any errors found will be included in the info
map. The "bad_record_indices" entry is a
comma-separated list of bad records (0-based). And
if so, there will also be an "error_N" entry for
each record with an error, where N is the index
(0-based).
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'allow_partial_batch': If set to
true , all correct records will be
inserted and incorrect records will be rejected and
reported. Otherwise, the entire batch will be
rejected if any records are incorrect.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'dry_run': If set to
true , no data will be saved and any
errors will be returned.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
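A usage sketch upserting two records and requesting the internal record IDs back; the table and its fields are hypothetical and must match the table's registered type:
const db = new GPUdb("http://localhost:9191"); // hypothetical URL
(async () => {
    const records = [
        { "id": 1, "total": 19.99 },
        { "id": 2, "total": 5.25 }
    ];
    const response = await db.insert_records("example.orders", records, {
        "update_on_existing_pk": "true", // upsert on primary-key collision
        "return_record_ids": "true"      // return internal record IDs
    });
})();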
insert_records_from_files(table_name, filepaths, modify_columns, create_table_options, options, callback) → {Promise}
Reads from one or more files and inserts the data into a new or existing
table.
The source data can be located either in
KiFS; on the cluster, accessible to
the database; or remotely, accessible via a pre-defined external
data source.
For delimited text files, there are two loading schemes: positional and
name-based. The name-based
loading scheme is enabled when the file has a header present and
text_has_header
is set to
true
. In this scheme, the source file(s) field names
must match the target table's column names exactly; however, the source file
can have more fields
than the target table has columns. If error_handling
is set to
permissive
, the source file can have fewer fields
than the target table has columns. If the name-based loading scheme is being
used, names matching
the file header's names may be provided to columns_to_load
instead of
numbers, but ranges are not supported.
Note: Due to data being loaded in parallel, there is no insertion order
guaranteed. For tables with
primary keys, in the case of a primary key collision, this means it is
indeterminate which record
will be inserted first and remain, while the rest of the colliding key
records are discarded.
Returns once all files are processed.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table into which the data will be
inserted, in
[schema_name.]table_name format, using standard
name resolution rules.
If the table does not exist, the table will be
created using either an existing
type_id or the type inferred from
the
file, and the new table name will have to meet
standard
table naming criteria. |
filepaths |
Array.<String>
|
A list of file paths from which data will be
sourced;
For paths in KiFS, use the uri prefix of
kifs:// followed by the path to
a file or directory. File matching by prefix is
supported, e.g. kifs://dir/file would match
dir/file_1
and dir/file_2. When prefix matching is used,
the path must start with a full, valid KiFS
directory name.
If an external data source is specified in
datasource_name , these file
paths must resolve to accessible files at that
data source location. Prefix matching is
supported.
If the data source is hdfs, prefixes must be
aligned with directories, i.e. partial file
names will
not match.
If no data source is specified, the files are
assumed to be local to the database and must
all be
accessible to the gpudb user, residing on the
path (or relative to the path) specified by the
external files directory in the Kinetica
configuration file. Wildcards
(*) can be used to
specify a group of files. Prefix matching is
supported, the prefixes must be aligned with
directories.
If the first path ends in .tsv, the text
delimiter will be defaulted to a tab character.
If the first path ends in .psv, the text
delimiter will be defaulted to a pipe character
(|). |
modify_columns |
Object
|
Not implemented yet |
create_table_options |
Object
|
Options from
GPUdb#create_table ,
allowing the structure of the table to
be defined independently of the data
source, when creating the target table
- 'type_id': ID of a
currently registered type.
- 'no_error_if_exists': If
true ,
prevents an error from occurring if
the table already exists and is of the
given type. If a table with
the same name but a different type
exists, it is still an error.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'is_replicated': Affects
the distribution scheme
for the table's data. If
true and the
given table has no explicit shard key defined,
the
table will be replicated. If
false , the table will be
sharded according to
the shard key specified in the
given type_id , or
randomly sharded, if
no shard key is specified.
Note that a type containing a shard
key cannot be used to create a
replicated table.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'foreign_keys':
Semicolon-separated list of
foreign keys, of the
format
'(source_column_name [, ...])
references
target_table_name(primary_key_column_name
[, ...]) [as foreign_key_name]'.
- 'foreign_shard_key':
Foreign shard key of the format
'source_column references
shard_by_column from
target_table(primary_key_column)'.
- 'partition_type': Partitioning scheme
to use.
Supported values:
- 'RANGE'
- 'INTERVAL'
- 'LIST'
- 'HASH'
- 'SERIES'
- 'partition_keys':
Comma-separated list of partition
keys, which are the columns or
column expressions by which records
will be assigned to partitions defined
by
partition_definitions .
- 'partition_definitions':
Comma-separated list of partition
definitions, whose format depends
on the choice of
partition_type . See
range partitioning,
interval
partitioning,
list partitioning,
hash partitioning,
or
series partitioning
for example formats.
- 'is_automatic_partition':
If
true ,
a new partition will be created for
values which don't fall into an
existing partition. Currently,
only supported for list partitions.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'ttl': Sets the TTL of the table
specified in
table_name .
- 'chunk_size': Indicates
the number of records per chunk to be
used for this table.
- 'is_result_table':
Indicates whether the table is a
memory-only table. A
result table cannot contain columns
with
store_only or text_search data-handling or
that are
non-charN strings,
and it will not be retained if the
server is restarted.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'strategy_definition':
The tier strategy
for the table and its columns.
|
options |
Object
|
Optional parameters.
- 'bad_record_table_name': Name of a
table to which records that were rejected are
written.
The bad-record-table has the following columns:
line_number (long), line_rejected (string),
error_message (string). When
error_handling is
abort , the bad-record-table is not
populated.
- 'bad_record_table_limit': A positive
integer indicating the maximum number of records
that can be
written to the bad-record-table. The default value
is '10000'.
- 'bad_record_table_limit_per_input':
For subscriptions, a positive integer indicating
the maximum number
of records that can be written to the
bad-record-table per file/payload. Default value
will be
bad_record_table_limit and total size
of the table per rank is limited to
bad_record_table_limit .
- 'batch_size': Number of records to
insert per batch when inserting data. The default
value is '50000'.
- 'column_formats': For each target
column specified, applies the column-property-bound
format to the source data loaded into that column.
Each column format will contain a mapping of one
or more of its column properties to an appropriate
format for each property. Currently supported
column properties include date, time, & datetime.
The parameter value must be formatted as a JSON
string of maps of column names to maps of column
properties to their corresponding column formats,
e.g.,
'{ "order_date" : { "date" : "%Y.%m.%d" },
"order_time" : { "time" : "%H:%M:%S" } }'.
See
default_column_formats for valid
format syntax.
- 'columns_to_load': Specifies a
comma-delimited list of columns from the source
data to
load. If more than one file is being loaded, this
list applies to all files.
Column numbers can be specified discretely or as a
range. For example, a value of '5,7,1..3' will
insert values from the fifth column in the source
data into the first column in the target table,
from the seventh column in the source data into the
second column in the target table, and from the
first through third columns in the source data into
the third through fifth columns in the target
table.
If the source data contains a header, column names
matching the file header names may be provided
instead of column numbers. If the target table
doesn't exist, the table will be created with the
columns in this order. If the target table does
exist with columns in a different order than the
source data, this list can be used to match the
order of the target table. For example, a value of
'C, B, A' will create a three column table with
column C, followed by column B, followed by column
A; or will insert those fields in that order into a
table created with columns in that order. If
the target table exists, the column names must
match the source data field names for a
name-mapping
to be successful.
Mutually exclusive with
columns_to_skip .
- 'columns_to_skip': Specifies a
comma-delimited list of columns from the source
data to
skip. Mutually exclusive with
columns_to_load .
- 'compression_type': Source data
compression type
Supported values:
- 'none': No compression.
- 'auto': Auto detect compression type
- 'gzip': gzip file compression.
- 'bzip2': bzip2 file compression.
The default value is 'auto'.
- 'datasource_name': Name of an existing
external data source from which data file(s)
specified in
filepaths will be loaded
- 'default_column_formats': Specifies
the default format to be applied to source data
loaded
into columns with the corresponding column
property. Currently supported column properties
include
date, time, & datetime. This default
column-property-bound format can be overridden by
specifying a
column property & format for a given target column
in
column_formats . For
each specified annotation, the format will apply to
all columns with that annotation unless a custom
column_formats for that annotation is
specified.
The parameter value must be formatted as a JSON
string that is a map of column properties to their
respective column formats, e.g., '{ "date" :
"%Y.%m.%d", "time" : "%H:%M:%S" }'. Column
formats are specified as a string of control
characters and plain text. The supported control
characters are 'Y', 'm', 'd', 'H', 'M', and 'S',
which follow the Linux 'strptime()'
specification, as well as 's', which specifies
seconds and fractional seconds (though the
fractional
component will be truncated past milliseconds).
Formats for the 'date' annotation must include the
'Y', 'm', and 'd' control characters. Formats for
the 'time' annotation must include the 'H', 'M',
and either 'S' or 's' (but not both) control
characters. Formats for the 'datetime' annotation
meet both the 'date' and 'time' control character
requirements. For example, '{"datetime" : "%m/%d/%Y
%H:%M:%S" }' would be used to interpret text
as "05/04/2000 12:12:11"
- 'error_handling': Specifies how errors
should be handled upon insertion.
Supported values:
- 'permissive': Records with missing
columns are populated with nulls if possible;
otherwise, the malformed records are skipped.
- 'ignore_bad_records': Malformed
records are skipped.
- 'abort': Stops current insertion and
aborts entire operation when an error is
encountered. Primary key collisions are considered
abortable errors in this mode.
The default value is 'abort'.
- 'file_type': Specifies the type of the
file(s) whose records will be inserted.
Supported values:
- 'avro': Avro file format
- 'delimited_text': Delimited text file
format; e.g., CSV, TSV, PSV, etc.
- 'gdb': Esri/GDB file format
- 'json': Json file format
- 'parquet': Apache Parquet file format
- 'shapefile': ShapeFile file format
The default value is 'delimited_text'.
- 'gdal_configuration_options':
Comma-separated list of GDAL configuration
options for this specific request, as
key=value pairs
- 'ignore_existing_pk': Specifies the
record collision error-suppression policy for
inserting into a table with a primary key, only used when
not in upsert mode (upsert mode is disabled when
update_on_existing_pk is
false ). If set to
true , any record being inserted that
is rejected
for having primary key values that match those of
an existing table record will be ignored with no
error generated. If false , the
rejection of any
record for having primary key values matching an
existing record will result in an error being
reported, as determined by
error_handling . If the specified
table does not
have a primary key or if upsert mode is in effect
(update_on_existing_pk is
true ), then this option has no effect.
Supported values:
- 'true': Ignore new records whose
primary key values collide with those of existing
records
- 'false': Treat as errors any new
records whose primary key values collide with those
of existing records
The default value is 'false'.
- 'ingestion_mode': Whether to do a full
load, dry run, or perform a type inference on the
source data.
Supported values:
- 'full': Run a type inference on the
source data (if needed) and ingest
- 'dry_run': Does not load data, but
walks through the source data and determines the
number of valid records, taking into account the
current mode of
error_handling .
- 'type_inference_only': Infer the type
of the source data and return, without ingesting
any data. The inferred type is returned in the
response.
The default value is 'full'.
- 'kafka_consumers_per_rank': Number of
Kafka consumer threads per rank (valid range 1-6).
The default value is '1'.
- 'kafka_group_id': The group id to be
used when consuming data from a Kafka topic (valid
only for Kafka datasource subscriptions).
- 'kafka_offset_reset_policy': Policy to
determine whether the Kafka data consumption starts
either at earliest offset or latest offset.
Supported values:
- 'earliest'
- 'latest'
The default value is 'earliest'.
- 'kafka_optimistic_ingest': Enable
optimistic ingestion where Kafka topic offsets and
table data are committed independently to achieve
parallelism.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'kafka_subscription_cancel_after':
Sets the Kafka subscription lifespan (in minutes).
Expired subscription will be cancelled
automatically.
- 'kafka_type_inference_fetch_timeout':
Maximum time to collect Kafka messages before type
inferencing on the set of them.
- 'layer': Comma-separated list of geo file
layer name(s).
- 'loading_mode': Scheme for
distributing the extraction and loading of data
from the source data file(s). This option applies
only when loading files that are local to the
database
Supported values:
- 'head': The head node loads all data.
All files must be available to the head node.
- 'distributed_shared': The head node
coordinates loading data by worker
processes across all nodes from shared files
available to all workers.
NOTE:
Instead of existing on a shared source, the files
can be duplicated on a source local to each host
to improve performance, though the files must
appear as the same data set from the perspective of
all hosts performing the load.
- 'distributed_local': A single worker
process on each node loads all files
that are available to it. This option works best
when each worker loads files from its own file
system, to maximize performance. In order to avoid
data duplication, either each worker performing
the load needs to have visibility to a set of files
unique to it (no file is visible to more than
one node) or the target table needs to have a
primary key (which will allow the worker to
automatically deduplicate data).
NOTE:
If the target table doesn't exist, the table
structure will be determined by the head node. If
the
head node has no files local to it, it will be
unable to determine the structure and the request
will fail.
If the head node is configured to have no worker
processes, no data strictly accessible to the head
node will be loaded.
The default value is 'head'.
- 'local_time_offset': Apply an offset
to Avro local timestamp columns.
- 'max_records_to_load': Limit the
number of records to load in this request: if this
number
is larger than
batch_size , then the
number of records loaded will be
limited to the next whole number of
batch_size (per working thread).
- 'num_tasks_per_rank': Number of tasks
for reading file per rank. Default will be system
configuration parameter,
external_file_reader_num_tasks.
- 'poll_interval': When subscribe is
true , the number of seconds between attempts
to load external files into the table. If zero,
polling will be continuous as long as data is
found. If no data is found, the interval will
steadily increase to a maximum of
60 seconds. The default value is '0'.
- 'primary_keys': Comma separated list
of column names to set as primary keys, when not
specified in the type.
- 'schema_registry_schema_name': Name of
the Avro schema in the schema registry to use when
reading Avro records.
- 'shard_keys': Comma separated list of
column names to set as shard keys, when not
specified in the type.
- 'skip_lines': Number of lines to skip
from the beginning of the file.
- 'subscribe': Continuously poll the
data source to check for new data and load it into
the table.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'table_insert_mode': Insertion scheme
to use when inserting records from multiple
shapefiles.
Supported values:
- 'single': Insert all records into a
single table.
- 'table_per_file': Insert records from
each file into a new table corresponding to that
file.
The default value is 'single'.
- 'text_comment_string': Specifies the
character string that should be interpreted as a
comment line
prefix in the source data. All lines in the data
starting with the provided string are ignored.
For
delimited_text
file_type only. The default value is
'#'.
- 'text_delimiter': Specifies the
character delimiting field values in the source
data
and field names in the header (if present).
For
delimited_text
file_type only. The default value is
','.
- 'text_escape_character': Specifies the
character that is used to escape other characters
in
the source data.
An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by
an escape character will be interpreted as the
ASCII bell, backspace, form feed, line feed,
carriage return, horizontal tab, & vertical tab,
respectively. For example, the escape character
followed by an 'n' will be interpreted as a newline
within a field value.
The escape character can also be used to escape the
quoting character, and will be treated as an
escape character whether it is within a quoted
field value or not.
For
delimited_text
file_type only.
- 'text_has_header': Indicates whether
the source data contains a header row.
For
delimited_text
file_type only.
Supported values:
- 'true'
- 'false'
The default value is 'true'.
- 'text_header_property_delimiter':
Specifies the delimiter for
column properties in the header
row (if
present). Cannot be set to same value as
text_delimiter .
For delimited_text
file_type only. The default value is
'|'.
- 'text_null_string': Specifies the
character string that should be interpreted as a
null
value in the source data.
For
delimited_text
file_type only. The default value is
'\\N'.
- 'text_quote_character': Specifies the
character that should be interpreted as a field
value
quoting character in the source data. The
character must appear at beginning and end of field
value
to take effect. Delimiters within quoted fields
are treated as literals and not delimiters. Within
a quoted field, two consecutive quote characters
will be interpreted as a single literal quote
character, effectively escaping it. To not have a
quote character, specify an empty string.
For
delimited_text
file_type only. The default value is
'"'.
- 'text_search_columns': Add
'text_search' property to internally inferenced
string columns.
Comma-separated list of column names or '*' for all
columns. To add 'text_search' property only to
string columns greater than or equal to a minimum
size, also set the
text_search_min_column_length
- 'text_search_min_column_length': Set
the minimum column size for strings to apply the
'text_search' property to. Used only when
text_search_columns has a value.
- 'truncate_strings': If set to
true , truncate string values that are
longer than the column's type size.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'truncate_table': If set to
true , truncates the table specified by
table_name prior to loading the
file(s).
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'type_inference_mode': Optimize type
inferencing for either speed or accuracy.
Supported values:
- 'accuracy': Scans data to get
exactly-typed & sized columns for all data scanned.
- 'speed': Scans data and picks the
widest possible column types so that 'all' values
will fit with minimum data scanned
The default value is 'speed'.
- 'update_on_existing_pk': Specifies the
record collision policy for inserting into a table
with a primary key. If set to
true , any existing table record with
primary
key values that match those of a record being
inserted will be replaced by that new record (the
new
data will be 'upserted'). If set to
false ,
any existing table record with primary key values
that match those of a record being inserted will
remain unchanged, while the new record will be
rejected and the error handled as determined by
ignore_existing_pk &
error_handling . If the
specified table does not have a primary key, then
this option has no effect.
Supported values:
- 'true': Upsert new records when
primary keys match existing records
- 'false': Reject new records when
primary keys match existing records
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
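A usage sketch loading a headered CSV from KiFS, diverting bad records instead of aborting; the file path and table names are hypothetical:
const db = new GPUdb("http://localhost:9191"); // hypothetical URL
(async () => {
    await db.insert_records_from_files(
        "example.orders",
        ["kifs://data/orders.csv"],  // KiFS path, per the uri prefix described above
        {},                          // modify_columns: not implemented yet
        {},                          // create_table_options: infer the type from the file
        {
            "file_type": "delimited_text",
            "text_has_header": "true",
            "error_handling": "ignore_bad_records",
            "bad_record_table_name": "example.orders_bad" // hypothetical
        });
})();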
insert_records_from_files_request(request, callback) → {Promise}
Reads from one or more files and inserts the data into a new or existing
table.
The source data can be located either in
KiFS; on the cluster, accessible to
the database; or remotely, accessible via a pre-defined external
data source.
For delimited text files, there are two loading schemes: positional and
name-based. The name-based
loading scheme is enabled when the file has a header present and
text_has_header
is set to
true
. In this scheme, the source file(s) field names
must match the target table's column names exactly; however, the source file
can have more fields
than the target table has columns. If error_handling
is set to
permissive
, the source file can have fewer fields
than the target table has columns. If the name-based loading scheme is being
used, names matching
the file header's names may be provided to columns_to_load
instead of
numbers, but ranges are not supported.
Note: Due to data being loaded in parallel, there is no insertion order
guaranteed. For tables with
primary keys, in the case of a primary key collision, this means it is
indeterminate which record
will be inserted first and remain, while the rest of the colliding key
records are discarded.
Returns once all files are processed.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
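As a sketch of the request-object pattern used by all _request variants (the field names shown simply mirror the non-request variant's parameters and are an assumption, as are the names and paths):
const GPUdb = require("./GPUdb.js"); // path is an assumption
const db = new GPUdb("http://localhost:9191");
const request = {
    table_name: "ki_home.example_table",
    filepaths: ["kifs://data/example.csv"],
    modify_columns: {},
    create_table_options: {},
    options: { file_type: "delimited_text" }
};
db.insert_records_from_files_request(request)
    .then(function(response) { console.log(response); })
    .catch(function(error) { console.error(error); });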
insert_records_from_json(records, table_name, create_table_options, options, callback) → {Promise}
Inserts one or more JSON records into a new or existing table.
Parameters:
Name |
Type |
Description |
records |
Object
|
Either a single JSON record or an array of JSON records, as either a JSON string or a native map/array type |
table_name |
string
|
The name of the table to insert into |
create_table_options |
Object
|
the same 'create_table_options' that apply to the '/insert/records/frompayload' endpoint |
options |
Object
|
the 'options' that apply to the '/insert/records/frompayload' endpoint |
callback |
function
|
an optional callback method that receives the results |
- Source:
Returns:
A promise that will be fulfilled with the 'data' object (containing
insertion results such as counts), if no callback function is provided.
-
Type
-
Promise
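A minimal sketch of inserting native JavaScript records (the table name is a placeholder; both options maps are left empty):
const GPUdb = require("./GPUdb.js"); // path is an assumption
const db = new GPUdb("http://localhost:9191");
db.insert_records_from_json(
    [ { id: 1, name: "alpha" }, { id: 2, name: "beta" } ], // native array of records
    "ki_home.example_table",
    {}, // create_table_options
    {}  // options
).then(function(data) {
    console.log(data); // insertion results (counts, etc.)
}).catch(function(error) { console.error(error); });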
insert_records_from_payload(table_name, data_text, data_bytes, modify_columns, create_table_options, options, callback) → {Promise}
Reads from the given text-based or binary payload and inserts the
data into a new or existing table. The table will be created if it doesn't
already exist.
Returns once all records are processed.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table into which the data will be
inserted, in
[schema_name.]table_name format, using standard
name resolution rules.
If the table does not exist, the table will be
created using either an existing
type_id or the type inferred from
the
payload, and the new table name will have to
meet standard
table naming criteria. |
data_text |
String
|
Records formatted as delimited text |
data_bytes |
String
|
Records formatted as binary data |
modify_columns |
Object
|
Not implemented yet |
create_table_options |
Object
|
Options used when creating the target
table. Includes type to use. The other
options match those in
GPUdb#create_table
- 'type_id': ID of a
currently registered type. The default
value is ''.
- 'no_error_if_exists': If
true , prevents an error
from occurring if the table already
exists and is of the given type. If a
table with the same ID but a different
type exists, it is still an error.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'is_replicated': Affects
the distribution scheme
for the table's data. If
true and the given type
has no explicit shard key defined,
the table will be replicated. If
false , the table will be
sharded according to
the shard key specified in the given
type_id , or randomly sharded, if
no shard key is specified. Note that
a type containing a shard key cannot
be used to create a replicated table.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'foreign_keys':
Semicolon-separated list of foreign keys, of the
format '(source_column_name [, ...])
references
target_table_name(primary_key_column_name
[, ...]) [as foreign_key_name]'.
- 'foreign_shard_key':
Foreign shard key of the format
'source_column references
shard_by_column from
target_table(primary_key_column)'.
- 'partition_type': Partitioning scheme
to use.
Supported values: 'RANGE', 'INTERVAL', 'LIST', 'HASH', 'SERIES'.
- 'partition_keys':
Comma-separated list of partition
keys, which are the columns or column
expressions by which records will be
assigned to partitions defined by
partition_definitions .
- 'partition_definitions':
Comma-separated list of partition
definitions, whose format depends on
the choice of
partition_type . See range partitioning,
interval
partitioning, list partitioning,
hash partitioning,
or series partitioning
for example formats.
- 'is_automatic_partition':
If
true , a new partition
will be created for values which don't
fall into an existing partition.
Currently only supported for list partitions.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'ttl': Sets the TTL of the table
specified in
table_name .
- 'chunk_size': Indicates
the number of records per chunk to be
used for this table.
- 'is_result_table':
Indicates whether the table is a memory-only table. A
result table cannot contain columns
with store_only or text_search data-handling or
that are non-charN strings,
and it will not be retained if the
server is restarted.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'strategy_definition':
The tier strategy for
the table and its columns.
|
options |
Object
|
Optional parameters.
- 'avro_header_bytes': Optional number
of bytes to skip when reading an avro record.
- 'avro_num_records': Optional number of
avro records, if data includes only records.
- 'avro_schema': Optional string
representing the Avro schema, for inserting records in
Avro format that do not include their schema.
- 'avro_schemaless': When the user provides
'avro_schema', the Avro data is assumed to be
schemaless, unless specified otherwise. Default is 'true'
when avro_schema is given; ignored when avro_schema is
not given.
Supported values: 'true', 'false'.
- 'bad_record_table_name': Optional name
of a table to which records that were rejected are
written. The bad-record-table has the following
columns: line_number (long), line_rejected
(string), error_message (string).
- 'bad_record_table_limit': A positive
integer indicating the maximum number of records
that can be written to the bad-record-table.
Default value is 10000
- 'bad_record_table_limit_per_input':
For subscriptions: A positive integer indicating
the maximum number of records that can be written
to the bad-record-table per file/payload. Default
value will be 'bad_record_table_limit' and total
size of the table per rank is limited to
'bad_record_table_limit'
- 'batch_size': Internal tuning
parameter--number of records per batch when
inserting data.
- 'column_formats': For each target
column specified, applies the column-property-bound
format to the source data
loaded into that column. Each column format will
contain a mapping of one or more of its column
properties to an appropriate format for each
property. Currently supported column properties
include date, time, & datetime. The parameter value
must be formatted as a JSON string of maps of
column names to maps of column properties to their
corresponding column formats, e.g.,
'{ "order_date" : { "date" : "%Y.%m.%d" },
"order_time" : { "time" : "%H:%M:%S" } }'.
See
default_column_formats for valid
format syntax.
- 'columns_to_load': Specifies a
comma-delimited list of columns from the source
data to
load. If more than one file is being loaded, this
list applies to all files.
Column numbers can be specified discretely or as a
range. For example, a value of '5,7,1..3' will
insert values from the fifth column in the source
data into the first column in the target table,
from the seventh column in the source data into the
second column in the target table, and from the
first through third columns in the source data into
the third through fifth columns in the target
table.
If the source data contains a header, column names
matching the file header names may be provided
instead of column numbers. If the target table
doesn't exist, the table will be created with the
columns in this order. If the target table does
exist with columns in a different order than the
source data, this list can be used to match the
order of the target table. For example, a value of
'C, B, A' will create a three column table with
column C, followed by column B, followed by column
A; or will insert those fields in that order into a
table created with columns in that order. If
the target table exists, the column names must
match the source data field names for a
name-mapping
to be successful.
Mutually exclusive with
columns_to_skip .
- 'columns_to_skip': Specifies a
comma-delimited list of columns from the source
data to
skip. Mutually exclusive with
columns_to_load .
- 'compression_type': Optional: payload
compression type
Supported values:
- 'none': Uncompressed
- 'auto': Default. Auto detect
compression type
- 'gzip': gzip file compression.
- 'bzip2': bzip2 file compression.
The default value is 'auto'.
- 'default_column_formats': Specifies
the default format to be applied to source data
loaded
into columns with the corresponding column
property. Currently supported column properties
include
date, time, & datetime. This default
column-property-bound format can be overridden by
specifying a
column property & format for a given target column
in
column_formats . For
each specified annotation, the format will apply to
all columns with that annotation unless a custom
column_formats for that annotation is
specified.
The parameter value must be formatted as a JSON
string that is a map of column properties to their
respective column formats, e.g., '{ "date" :
"%Y.%m.%d", "time" : "%H:%M:%S" }'. Column
formats are specified as a string of control
characters and plain text. The supported control
characters are 'Y', 'm', 'd', 'H', 'M', and 'S',
which follow the Linux 'strptime()'
specification, as well as 's', which specifies
seconds and fractional seconds (though the
fractional
component will be truncated past milliseconds).
Formats for the 'date' annotation must include the
'Y', 'm', and 'd' control characters. Formats for
the 'time' annotation must include the 'H', 'M',
and either 'S' or 's' (but not both) control
characters. Formats for the 'datetime' annotation
meet both the 'date' and 'time' control character
requirements. For example, '{"datetime" : "%m/%d/%Y
%H:%M:%S" }' would be used to interpret text
as "05/04/2000 12:12:11"
- 'error_handling': Specifies how errors
should be handled upon insertion.
Supported values:
- 'permissive': Records with missing
columns are populated with nulls if possible;
otherwise, the malformed records are skipped.
- 'ignore_bad_records': Malformed
records are skipped.
- 'abort': Stops current insertion and
aborts entire operation when an error is
encountered. Primary key collisions are considered
abortable errors in this mode.
The default value is 'abort'.
- 'file_type': Specifies the type of the
file(s) whose records will be inserted.
Supported values:
- 'avro': Avro file format
- 'delimited_text': Delimited text file
format; e.g., CSV, TSV, PSV, etc.
- 'gdb': Esri/GDB file format
- 'json': JSON file format
- 'parquet': Apache Parquet file format
- 'shapefile': ShapeFile file format
The default value is 'delimited_text'.
- 'gdal_configuration_options': Comma-separated
list of GDAL configuration options for the
specific request, as key=value pairs. The default value is
''.
- 'ignore_existing_pk': Specifies the
record collision error-suppression policy for
inserting into a table with a primary key, only used when
not in upsert mode (upsert mode is disabled when
update_on_existing_pk is
false ). If set to
true , any record being inserted that
is rejected
for having primary key values that match those of
an existing table record will be ignored with no
error generated. If false , the
rejection of any
record for having primary key values matching an
existing record will result in an error being
reported, as determined by
error_handling . If the specified
table does not
have a primary key or if upsert mode is in effect
(update_on_existing_pk is
true ), then this option has no effect.
Supported values:
- 'true': Ignore new records whose
primary key values collide with those of existing
records
- 'false': Treat as errors any new
records whose primary key values collide with those
of existing records
The default value is 'false'.
- 'ingestion_mode': Whether to do a full
load, dry run, or perform a type inference on the
source data.
Supported values:
- 'full': Run a type inference on the
source data (if needed) and ingest
- 'dry_run': Does not load data, but
walks through the source data and determines the
number of valid records, taking into account the
current mode of
error_handling .
- 'type_inference_only': Infer the type
of the source data and return, without ingesting
any data. The inferred type is returned in the
response.
The default value is 'full'.
- 'layer': Optional: comma-separated list of
geo file layer name(s). The default value is ''.
- 'loading_mode': Scheme for
distributing the extraction and loading of data
from the source data file(s). This option applies
only when loading files that are local to the
database
Supported values:
- 'head': The head node loads all data.
All files must be available to the head node.
- 'distributed_shared': The head node
coordinates loading data by worker
processes across all nodes from shared files
available to all workers.
NOTE:
Instead of existing on a shared source, the files
can be duplicated on a source local to each host
to improve performance, though the files must
appear as the same data set from the perspective of
all hosts performing the load.
- 'distributed_local': A single worker
process on each node loads all files
that are available to it. This option works best
when each worker loads files from its own file
system, to maximize performance. In order to avoid
data duplication, either each worker performing
the load needs to have visibility to a set of files
unique to it (no file is visible to more than
one node) or the target table needs to have a
primary key (which will allow the worker to
automatically deduplicate data).
NOTE:
If the target table doesn't exist, the table
structure will be determined by the head node. If
the
head node has no files local to it, it will be
unable to determine the structure and the request
will fail.
If the head node is configured to have no worker
processes, no data strictly accessible to the head
node will be loaded.
The default value is 'head'.
- 'local_time_offset': For Avro local
timestamp columns
- 'max_records_to_load': Limit the
number of records to load in this request: if this
number is larger than batch_size, then the number
of records loaded will be limited to the next whole
multiple of batch_size (per working thread). The
default value is ''.
- 'num_tasks_per_rank': Optional: number
of tasks for reading files per rank. Default will be
external_file_reader_num_tasks
- 'poll_interval': The
number of seconds between attempts to load
external files into the table. If zero, polling
will be continuous as long as data is found. If no
data is found, the interval will steadily increase
to a maximum of 60 seconds.
- 'primary_keys': Optional: comma
separated list of column names, to set as primary
keys, when not specified in the type. The default
value is ''.
- 'schema_registry_schema_id':
- 'schema_registry_schema_name':
- 'schema_registry_schema_version':
- 'shard_keys': Optional: comma-separated
list of column names to set as shard
keys, when not specified in the type. The default
value is ''.
- 'skip_lines': Number of lines to skip
from the beginning of the file.
- 'subscribe': Continuously poll the
data source to check for new data and load it into
the table.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'table_insert_mode': Optional:
When inserting records from
multiple files, if set to 'table_per_file', insert the
records from each file into a new table. Currently
supported only for shapefiles.
Supported values:
- 'single'
- 'table_per_file'
The default value is 'single'.
- 'text_comment_string': Specifies the
character string that should be interpreted as a
comment line
prefix in the source data. All lines in the data
starting with the provided string are ignored.
For
delimited_text
file_type only. The default value is
'#'.
- 'text_delimiter': Specifies the
character delimiting field values in the source
data
and field names in the header (if present).
For
delimited_text
file_type only. The default value is
','.
- 'text_escape_character': Specifies the
character that is used to escape other characters
in
the source data.
An 'a', 'b', 'f', 'n', 'r', 't', or 'v' preceded by
an escape character will be interpreted as the
ASCII bell, backspace, form feed, line feed,
carriage return, horizontal tab, & vertical tab,
respectively. For example, the escape character
followed by an 'n' will be interpreted as a newline
within a field value.
The escape character can also be used to escape the
quoting character, and will be treated as an
escape character whether it is within a quoted
field value or not.
For
delimited_text
file_type only.
- 'text_has_header': Indicates whether
the source data contains a header row.
For
delimited_text
file_type only.
Supported values: 'true', 'false'.
The default value is 'true'.
- 'text_header_property_delimiter':
Specifies the delimiter for
column properties in the header
row (if
present). Cannot be set to same value as
text_delimiter .
For delimited_text
file_type only. The default value is
'|'.
- 'text_null_string': Specifies the
character string that should be interpreted as a
null
value in the source data.
For
delimited_text
file_type only. The default value is
'\\N'.
- 'text_quote_character': Specifies the
character that should be interpreted as a field
value
quoting character in the source data. The
character must appear at beginning and end of field
value
to take effect. Delimiters within quoted fields
are treated as literals and not delimiters. Within
a quoted field, two consecutive quote characters
will be interpreted as a single literal quote
character, effectively escaping it. To not have a
quote character, specify an empty string.
For
delimited_text
file_type only. The default value is
'"'.
- 'text_search_columns': Add
'text_search' property to internally inferred
string columns. Comma-separated list of column
names or '*' for all columns. To add the 'text_search'
property only to string columns of a minimum size,
also set the option 'text_search_min_column_length'.
- 'text_search_min_column_length': Set
the minimum column size. Used only when
'text_search_columns' has a value.
- 'truncate_strings': If set to
true , truncate string values that are
longer than the column's type size.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'truncate_table': If set to
true , truncates the table specified by
table_name prior to loading the
file(s).
Supported values: 'true', 'false'.
The default value is 'false'.
- 'type_inference_mode': Optimize type
inferencing for either speed or accuracy.
Supported values:
- 'accuracy': Scans data to get
exactly-typed & sized columns for all data scanned.
- 'speed': Scans data and picks the
widest possible column types so that 'all' values
will fit with minimum data scanned
The default value is 'speed'.
- 'update_on_existing_pk': Specifies the
record collision policy for inserting into a table
with a primary key. If set to
true , any existing table record with
primary
key values that match those of a record being
inserted will be replaced by that new record (the
new
data will be "upserted"). If set to
false ,
any existing table record with primary key values
that match those of a record being inserted will
remain unchanged, while the new record will be
rejected and the error handled as determined by
ignore_existing_pk &
error_handling . If the
specified table does not have a primary key, then
this option has no effect.
Supported values:
- 'true': Upsert new records when
primary keys match existing records
- 'false': Reject new records when
primary keys match existing records
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
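A minimal sketch of loading a small CSV payload held in memory (names are placeholders; the empty string for data_bytes assumes a text-only payload):
const GPUdb = require("./GPUdb.js"); // path is an assumption
const db = new GPUdb("http://localhost:9191");
const csv = "id,name\n1,alpha\n2,beta";
db.insert_records_from_payload(
    "ki_home.example_table",
    csv, // data_text
    "",  // data_bytes (unused for a text payload)
    {},  // modify_columns (not implemented yet)
    {},  // create_table_options
    { file_type: "delimited_text", text_has_header: "true" }
).then(function(response) { console.log(response); })
 .catch(function(error) { console.error(error); });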
insert_records_from_payload_request(request, callback) → {Promise}
Reads from the given text-based or binary payload and inserts the
data into a new or existing table. The table will be created if it doesn't
already exist.
Returns once all records are processed.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
insert_records_from_query(table_name, remote_query, modify_columns, create_table_options, options, callback) → {Promise}
Computes a remote query result and inserts the result data into a new or
existing table.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table into which the data will be
inserted, in
[schema_name.]table_name format, using standard
name resolution rules.
If the table does not exist, the table will be
created using either an existing
type_id or the type inferred from
the
remote query, and the new table name will have
to meet standard
table naming criteria. |
remote_query |
String
|
Query for which result data needs to be
imported |
modify_columns |
Object
|
Not implemented yet |
create_table_options |
Object
|
Options used when creating the target
table.
- 'type_id': ID of a
currently registered type. The default
value is ''.
- 'no_error_if_exists': If
true , prevents an error
from occurring if the table already
exists and is of the given type. If a
table with the same ID but a different
type exists, it is still an error.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'is_replicated': Affects
the distribution scheme
for the table's data. If
true and the given type
has no explicit shard key defined,
the table will be replicated. If
false , the table will be
sharded according to
the shard key specified in the given
type_id , or randomly sharded, if
no shard key is specified. Note that
a type containing a shard key cannot
be used to create a replicated table.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'foreign_keys':
Semicolon-separated list of foreign keys, of the
format '(source_column_name [, ...])
references
target_table_name(primary_key_column_name
[, ...]) [as foreign_key_name]'.
- 'foreign_shard_key':
Foreign shard key of the format
'source_column references
shard_by_column from
target_table(primary_key_column)'.
- 'partition_type': Partitioning scheme
to use.
Supported values: 'RANGE', 'INTERVAL', 'LIST', 'HASH', 'SERIES'.
- 'partition_keys':
Comma-separated list of partition
keys, which are the columns or column
expressions by which records will be
assigned to partitions defined by
partition_definitions .
- 'partition_definitions':
Comma-separated list of partition
definitions, whose format depends on
the choice of
partition_type . See range partitioning,
interval
partitioning, list partitioning,
hash partitioning,
or series partitioning
for example formats.
- 'is_automatic_partition':
If
true , a new partition
will be created for values which don't
fall into an existing partition.
Currently only supported for list partitions.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'ttl': Sets the TTL of the table
specified in
table_name .
- 'chunk_size': Indicates
the number of records per chunk to be
used for this table.
- 'is_result_table':
Indicates whether the table is a memory-only table. A
result table cannot contain columns
with store_only or text_search data-handling or
that are non-charN strings,
and it will not be retained if the
server is restarted.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'strategy_definition':
The tier strategy for
the table and its columns.
|
options |
Object
|
Optional parameters.
- 'bad_record_table_name': Optional name
of a table to which records that were rejected are
written. The bad-record-table has the following
columns: line_number (long), line_rejected
(string), error_message (string). When error
handling is 'abort', the bad-record table is not
populated.
- 'bad_record_table_limit': A positive
integer indicating the maximum number of records
that can be written to the bad-record-table.
Default value is 10000
- 'batch_size': Number of records per
batch when inserting data.
- 'datasource_name': Name of an existing
external data source from which table will be
loaded
- 'error_handling': Specifies how errors
should be handled upon insertion.
Supported values:
- 'permissive': Records with missing
columns are populated with nulls if possible;
otherwise, the malformed records are skipped.
- 'ignore_bad_records': Malformed
records are skipped.
- 'abort': Stops current insertion and
aborts entire operation when an error is
encountered. Primary key collisions are considered
abortable errors in this mode.
The default value is 'abort'.
- 'ignore_existing_pk': Specifies the
record collision error-suppression policy for
inserting into a table with a primary key, only used when
not in upsert mode (upsert mode is disabled when
update_on_existing_pk is
false ). If set to
true , any record being inserted that
is rejected
for having primary key values that match those of
an existing table record will be ignored with no
error generated. If false , the
rejection of any
record for having primary key values matching an
existing record will result in an error being
reported, as determined by
error_handling . If the specified
table does not
have a primary key or if upsert mode is in effect
(update_on_existing_pk is
true ), then this option has no effect.
Supported values:
- 'true': Ignore new records whose
primary key values collide with those of existing
records
- 'false': Treat as errors any new
records whose primary key values collide with those
of existing records
The default value is 'false'.
- 'ingestion_mode': Whether to do a full
load, dry run, or perform a type inference on the
source data.
Supported values:
- 'full': Run a type inference on the
source data (if needed) and ingest
- 'dry_run': Does not load data, but
walks through the source data and determines the
number of valid records, taking into account the
current mode of
error_handling .
- 'type_inference_only': Infer the type
of the source data and return, without ingesting
any data. The inferred type is returned in the
response.
The default value is 'full'.
- 'jdbc_fetch_size': The JDBC fetch
size, which determines how many rows to fetch per
round trip.
- 'jdbc_session_init_statement':
Executes the statement for each JDBC session before
doing the actual load. The default value is ''.
- 'num_splits_per_rank': Optional:
number of splits for reading data per rank. Default
will be external_file_reader_num_tasks. The
default value is ''.
- 'num_tasks_per_rank': Optional: number
of tasks for reading data per rank. Default will be
external_file_reader_num_tasks
- 'primary_keys': Optional: comma
separated list of column names, to set as primary
keys, when not specified in the type. The default
value is ''.
- 'shard_keys': Optional: comma-separated
list of column names to set as shard
keys, when not specified in the type. The default
value is ''.
- 'subscribe': Continuously poll the
data source to check for new data and load it into
the table.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'truncate_table': If set to
true , truncates the table specified by
table_name prior to loading the data.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'remote_query': Remote SQL query from
which data will be sourced
- 'remote_query_order_by': Name of
column to be used for splitting the query into
multiple sub-queries using ordering of given
column. The default value is ''.
- 'remote_query_filter_column': Name of
column to be used for splitting the query into
multiple sub-queries using the data distribution of
given column. The default value is ''.
- 'remote_query_increasing_column':
Column on subscribed remote query result that will
increase for new records (e.g., TIMESTAMP). The
default value is ''.
- 'remote_query_partition_column': Alias
name for remote_query_filter_column. The default
value is ''.
- 'truncate_strings': If set to
true , truncate string values that are
longer than the column's type size.
Supported values: 'true', 'false'.
The default value is 'false'.
- 'update_on_existing_pk': Specifies the
record collision policy for inserting into a table
with a primary key. If set to
true , any existing table record with
primary
key values that match those of a record being
inserted will be replaced by that new record (the
new
data will be "upserted"). If set to
false ,
any existing table record with primary key values
that match those of a record being inserted will
remain unchanged, while the new record will be
rejected and the error handled as determined by
ignore_existing_pk &
error_handling . If the
specified table does not have a primary key, then
this option has no effect.
Supported values:
- 'true': Upsert new records when
primary keys match existing records
- 'false': Reject new records when
primary keys match existing records
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
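A minimal sketch of importing the result of a remote query through a pre-defined external data source (the data source name, query, and table name are placeholders):
const GPUdb = require("./GPUdb.js"); // path is an assumption
const db = new GPUdb("http://localhost:9191");
db.insert_records_from_query(
    "ki_home.remote_orders",
    "SELECT * FROM orders WHERE order_date >= '2023-01-01'",
    {}, // modify_columns (not implemented yet)
    {}, // create_table_options
    { datasource_name: "my_jdbc_source", jdbc_fetch_size: "50000" }
).then(function(response) { console.log(response); })
 .catch(function(error) { console.error(error); });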
insert_records_from_query_request(request, callback) → {Promise}
Computes a remote query result and inserts the result data into a new or
existing table.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
insert_records_random(table_name, count, options, callback) → {Promise}
Generates a specified number of random records and adds them to the given
table.
There is an optional parameter that allows the user to customize the ranges
of
the column values. It also allows the user to specify linear profiles for
some
or all columns in which case linear values are generated rather than random
ones. Only individual tables are supported for this operation.
This operation is synchronous, meaning that a response will not be returned
until all random records are fully available.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Table to which random records will be added, in
[schema_name.]table_name format, using standard
name resolution rules. Must
be an existing table, not a view. |
count |
Number
|
Number of records to generate. |
options |
Object
|
Optional parameter to pass in specifications for
the randomness of the values. This map is
different from the *options* parameter of most
other endpoints in that it is a map of string to
map of string to doubles, while most others are
maps of string to string. In this map, the top
level keys represent which column's parameters are
being specified, while the internal keys represents
which parameter is being specified. These
parameters take on different meanings depending on
the type of the column. Below follows a more
detailed description of the map:
- 'seed': If provided, the internal
random number generator will be initialized with
the given value. The minimum is 0. This allows
for the same set of random numbers to be generated
across invocation of this endpoint in case the user
wants to repeat the test. Since
options is a map of maps, we need an
internal map to provide the seed value. For
example, to pass 100 as the seed value through this
parameter, you need something equivalent to:
'options' = {'seed': { 'value': 100 } }
- 'value': The seed value to use
- 'all': This key indicates that the
specifications relayed in the internal map are to
be applied to all columns of the records.
- 'min': For numerical columns, the
minimum of the generated values is set to this
value. Default is -99999. For point, shape, and
track columns, min for numeric 'x' and 'y' columns
needs to be within [-180, 180] and [-90, 90],
respectively. The default minimum possible values
for these columns in such cases are -180.0 and
-90.0. For the 'TIMESTAMP' column, the default
minimum corresponds to Jan 1, 2010.
For string columns, the minimum length of the
randomly generated strings is set to this value
(default is 0). If both minimum and maximum are
provided, minimum must be less than or equal to
max. Value needs to be within [0, 200].
If the min is outside the accepted ranges for
string columns and 'x' and 'y' columns for
point/shape/track, then those parameters will not
be set; however, an error will not be thrown in
such a case. It is the responsibility of the user
to use the
all parameter judiciously.
- 'max': For numerical columns, the
maximum of the generated values is set to this
value. Default is 99999. For point, shape, and
track columns, max for numeric 'x' and 'y' columns
needs to be within [-180, 180] and [-90, 90],
respectively. The default maximum possible values
for these columns in such cases are 180.0 and 90.0.
For string columns, the maximum length of the
randomly generated strings is set to this value
(default is 200). If both minimum and maximum are
provided, *max* must be greater than or equal to
*min*. Value needs to be within [0, 200].
If the *max* is outside the accepted ranges for
string columns and 'x' and 'y' columns for
point/shape/track, then those parameters will not
be set; however, an error will not be thrown in
such a case. It is the responsibility of the user
to use the
all parameter judiciously.
- 'interval': If specified, generate
values for all columns evenly spaced with the given
interval value. If a max value is specified for a
given column the data is randomly generated between
min and max and decimated down to the interval. If
no max is provided the data is linearly generated
starting at the minimum value (instead of
generating random data). For non-decimated
string-type columns the interval value is ignored.
Instead the values are generated following the
pattern: 'attrname_creationIndex#', i.e. the column
name suffixed with an underscore and a running
counter (starting at 0). For string types with
limited size (e.g., char4) the prefix is dropped. No
nulls will be generated for nullable columns.
- 'null_percentage': If specified, then
generate the given percentage of the count as nulls
for all nullable columns. This option will be
ignored for non-nullable columns. The value must
be within the range [0, 1.0]. The default value is
5% (0.05).
- 'cardinality': If specified, limit the
randomly generated values to a fixed set. Not
allowed on a column with interval specified, and is
not applicable to WKT or Track-specific columns.
The value must be greater than 0. This option is
disabled by default.
- 'attr_name': Use the desired column
name in place of
attr_name , and set
the following parameters for the column specified.
This overrides any parameter set by
all .
- 'min': For numerical columns, the
minimum of the generated values is set to this
value. Default is -99999. For point, shape, and
track columns, min for numeric 'x' and 'y' columns
needs to be within [-180, 180] and [-90, 90],
respectively. The default minimum possible values
for these columns in such cases are -180.0 and
-90.0. For the 'TIMESTAMP' column, the default
minimum corresponds to Jan 1, 2010.
For string columns, the minimum length of the
randomly generated strings is set to this value
(default is 0). If both minimum and maximum are
provided, minimum must be less than or equal to
max. Value needs to be within [0, 200].
If the min is outside the accepted ranges for
string columns and 'x' and 'y' columns for
point/shape/track, then those parameters will not
be set; however, an error will not be thrown in
such a case. It is the responsibility of the user
to use the
all parameter judiciously.
- 'max': For numerical columns, the
maximum of the generated values is set to this
value. Default is 99999. For point, shape, and
track columns, max for numeric 'x' and 'y' columns
needs to be within [-180, 180] and [-90, 90],
respectively. The default maximum possible values
for these columns in such cases are 180.0 and 90.0.
For string columns, the maximum length of the
randomly generated strings is set to this value
(default is 200). If both minimum and maximum are
provided, *max* must be greater than or equal to
*min*. Value needs to be within [0, 200].
If the *max* is outside the accepted ranges for
string columns and 'x' and 'y' columns for
point/shape/track, then those parameters will not
be set; however, an error will not be thrown in
such a case. It is the responsibility of the user
to use the
all parameter judiciously.
- 'interval': If specified, generate
values for all columns evenly spaced with the given
interval value. If a max value is specified for a
given column the data is randomly generated between
min and max and decimated down to the interval. If
no max is provided the data is linearly generated
starting at the minimum value (instead of
generating random data). For non-decimated
string-type columns the interval value is ignored.
Instead the values are generated following the
pattern: 'attrname_creationIndex#', i.e. the column
name suffixed with an underscore and a running
counter (starting at 0). For string types with
limited size (e.g., char4) the prefix is dropped. No
nulls will be generated for nullable columns.
- 'null_percentage': If specified and if
this column is nullable, then generate the given
percentage of the count as nulls. This option will
result in an error if the column is not nullable.
The value must be within the range [0, 1.0]. The
default value is 5% (0.05).
- 'cardinality': If specified, limit the
randomly generated values to a fixed set. Not
allowed on a column with interval specified, and is
not applicable to WKT or Track-specific columns.
The value must be greater than 0. This option is
disabled by default.
- 'track_length': This key-map pair is
only valid for track data sets (an error is thrown
otherwise). No nulls would be generated for
nullable columns.
- 'min': Minimum possible length for
generated series; default is 100 records per
series. Must be an integral value within the range
[1, 500]. If both min and max are specified, min
must be less than or equal to max.
- 'max': Maximum possible length for
generated series; default is 500 records per
series. Must be an integral value within the range
[1, 500]. If both min and max are specified, max
must be greater than or equal to min.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
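Because options here is a map of maps, the seed and per-column settings are nested; a minimal sketch (the table and column names are placeholders):
const GPUdb = require("./GPUdb.js"); // path is an assumption
const db = new GPUdb("http://localhost:9191");
db.insert_records_random(
    "ki_home.example_table",
    1000, // number of records to generate
    {
        seed: { value: 100 },         // reproducible runs
        all: { min: 0, max: 500 },    // defaults for every column
        ts: { null_percentage: 0.1 }  // per-column override; 'ts' is a hypothetical column name
    }
).then(function(response) { console.log(response); })
 .catch(function(error) { console.error(error); });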
insert_records_random_request(request, callback) → {Promise}
Generates a specified number of random records and adds them to the given
table.
There is an optional parameter that allows the user to customize the ranges
of
the column values. It also allows the user to specify linear profiles for
some
or all columns in which case linear values are generated rather than random
ones. Only individual tables are supported for this operation.
This operation is synchronous, meaning that a response will not be returned
until all random records are fully available.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
insert_records_request(request, callback) → {Promise}
Adds multiple records to the specified table. The operation is
synchronous, meaning that a response will not be returned until all the
records
are fully inserted and available. The response payload provides the counts
of
the number of records actually inserted and/or updated, and can provide the
unique identifier of each added record.
The options
parameter can be used to customize this function's
behavior.
The update_on_existing_pk
option specifies the record
collision policy for inserting into a table with a
primary
key, but is ignored if
no primary key exists.
The return_record_ids
option indicates that the
database should return the unique identifiers of inserted records.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
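A hypothetical sketch of the request form (the request field names shown simply mirror the non-request variant's table_name/data/options parameters and are an assumption):
const GPUdb = require("./GPUdb.js"); // path is an assumption
const db = new GPUdb("http://localhost:9191");
db.insert_records_request({
    table_name: "ki_home.example_table",
    data: [ { id: 1, name: "alpha" } ], // assumed field name
    options: { return_record_ids: "true" }
}).then(function(response) { console.log(response); })
  .catch(function(error) { console.error(error); });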
insert_symbol(symbol_id, symbol_format, symbol_data, options, callback) → {Promise}
Adds a symbol or icon (i.e. an image) to represent data points when data is
rendered visually. Users must provide the symbol identifier (string), a
format (currently supported: 'svg' and 'svg_path'), the data for the symbol,
and any additional optional parameters (e.g., color). To have a symbol used
for rendering, create a table with a string column named 'SYMBOLCODE' (along
with 'x' or 'y', for example). Then, when the table is rendered (via
WMS), if the
'dosymbology' parameter is 'true', the value of the 'SYMBOLCODE' column
is used to pick the symbol displayed for each point.
Parameters:
Name |
Type |
Description |
symbol_id |
String
|
The id of the symbol being added. This is the
same id that should be in the 'SYMBOLCODE' column
for objects using this symbol |
symbol_format |
String
|
Specifies the symbol format. Must be either
'svg' or 'svg_path'.
Supported values: 'svg', 'svg_path'.
|
symbol_data |
String
|
The actual symbol data. If
symbol_format is 'svg' then this
should be the raw bytes representing an svg
file. If symbol_format is 'svg_path'
then this should be an svg path string, for
example:
'M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z' |
options |
Object
|
Optional parameters.
- 'color': If
symbol_format
is 'svg' this is ignored. If
symbol_format is 'svg_path' then this
option specifies the color (in RRGGBB hex format)
of the path. For example, to have the path rendered
in red, use 'FF0000'. If 'color' is not provided
then '00FF00' (i.e. green) is used by default.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
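A minimal sketch registering a red path symbol to be referenced by a 'SYMBOLCODE' column (the symbol id is a placeholder):
const GPUdb = require("./GPUdb.js"); // path is an assumption
const db = new GPUdb("http://localhost:9191");
db.insert_symbol(
    "truck_icon", // must match values stored in the table's SYMBOLCODE column
    "svg_path",
    "M25.979,12.896,5.979,12.896,5.979,19.562,25.979,19.562z",
    { color: "FF0000" } // RRGGBB hex; renders the path in red
).then(function(response) { console.log(response); })
 .catch(function(error) { console.error(error); });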
insert_symbol_request(request, callback) → {Promise}
Adds a symbol or icon (i.e. an image) to represent data points when data is
rendered visually. Users must provide the symbol identifier (string), a
format (currently supported: 'svg' and 'svg_path'), the data for the symbol,
and any additional optional parameters (e.g., color). To have a symbol used
for rendering, create a table with a string column named 'SYMBOLCODE' (along
with 'x' or 'y', for example). Then, when the table is rendered (via
WMS), if the
'dosymbology' parameter is 'true', the value of the 'SYMBOLCODE' column
is used to pick the symbol displayed for each point.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
kill_proc(run_id, options, callback) → {Promise}
Kills a running proc instance.
Parameters:
Name |
Type |
Description |
run_id |
String
|
The run ID of a running proc instance. If a proc
with a matching run ID is not found or the proc
instance has already completed, no procs will be
killed. If not specified, all running proc instances
will be killed. |
options |
Object
|
Optional parameters.
- 'run_tag': If
run_id is
specified, kill the proc instance that has a
matching run ID and a matching run tag that was
provided to GPUdb#execute_proc . If
run_id is not specified, kill the proc
instance(s) where a matching run tag was provided
to GPUdb#execute_proc . The default
value is ''.
- 'clear_execute_at_startup': If
true , kill and remove the instance of
the proc matching the auto-start run ID that was
created to run when the database is started. The
auto-start run ID was returned from
GPUdb#execute_proc and can be
retrieved using GPUdb#show_proc .
Supported values: 'true', 'false'.
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
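A minimal sketch killing one proc instance by run ID and run tag (both values are placeholders, as returned from or passed to GPUdb#execute_proc):
const GPUdb = require("./GPUdb.js"); // path is an assumption
const db = new GPUdb("http://localhost:9191");
db.kill_proc(
    "123456", // run ID of the running proc instance
    { run_tag: "nightly_etl" } // only kill if the run tag also matches
).then(function(response) { console.log(response); })
 .catch(function(error) { console.error(error); });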
kill_proc_request(request, callback) → {Promise}
Kills a running proc instance.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
lock_table(table_name, lock_type, options, callback) → {Promise}
Manages global access to a table's data. By default a table has a
lock_type
of read_write
, indicating all operations
are permitted. A user may request a read_only
or a
write_only
lock, after which only read or write operations,
respectively, are permitted on the table until the lock is removed. When
lock_type
is no_access
then no operations are
permitted on the table. The lock status can be queried by setting
lock_type
to status
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table to be locked, in
[schema_name.]table_name format, using standard
name resolution rules. It
must be a currently existing table or view. |
lock_type |
String
|
The type of lock being applied to the table.
Setting it to status will return the
current lock status of the table without changing
it.
Supported values:
- 'status': Show locked status
- 'no_access': Allow no read/write
operations
- 'read_only': Allow only read
operations
- 'write_only': Allow only write
operations
- 'read_write': Allow all read/write
operations
The default value is 'status'. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
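A minimal sketch that applies a read-only lock and then queries the lock status without changing it (the table name is a placeholder):
const GPUdb = require("./GPUdb.js"); // path is an assumption
const db = new GPUdb("http://localhost:9191");
db.lock_table("ki_home.example_table", "read_only", {})
    .then(function(response) {
        // Query without changing the lock by using 'status'
        return db.lock_table("ki_home.example_table", "status", {});
    })
    .then(function(response) { console.log(response); })
    .catch(function(error) { console.error(error); });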
lock_table_request(request, callback) → {Promise}
Manages global access to a table's data. By default a table has a
lock_type
of read_write
, indicating all operations
are permitted. A user may request a read_only
or a
write_only
lock, after which only read or write operations,
respectively, are permitted on the table until the lock is removed. When
lock_type
is no_access
then no operations are
permitted on the table. The lock status can be queried by setting
lock_type
to status
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
match_graph(graph_name, sample_points, solve_method, solution_table, options, callback) → {Promise}
Matches a directed route implied by a given set of
latitude/longitude points to an existing underlying road network graph using
a
given solution type.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, the
Graph REST
Tutorial,
and/or some
/match/graph
examples
before using this endpoint.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the underlying geospatial graph resource
to match to using sample_points . |
sample_points |
Array.<String>
|
Sample points used to match to an
underlying geospatial
graph. Sample points must be specified
using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with: existing
column names, e.g.,
'table.column AS SAMPLE_X'; expressions,
e.g.,
'ST_MAKEPOINT(table.x, table.y) AS
SAMPLE_WKTPOINT'; or constant values, e.g.,
'{1, 2, 10} AS SAMPLE_TRIPID'. |
solve_method |
String
|
The type of solver to use for graph matching.
Supported values:
- 'markov_chain': Matches
sample_points to the graph using
the Hidden Markov Model (HMM)-based method,
which conducts a range-tree closest-edge
search to find the best combinations of
possible road segments
(num_segments ) for each sample
point to create the best route. The route is
secured one point at a time while looking
ahead chain_width number of
points, so the prediction is corrected after
each point. This solution type is the most
accurate but also the most computationally
intensive. Related options:
num_segments and
chain_width .
- 'match_od_pairs': Matches
sample_points to find the most
probable path between origin and destination
pairs with cost constraints.
- 'match_supply_demand': Matches
sample_points to optimize
scheduling multiple supplies (trucks) with
varying sizes to varying demand sites with
varying capacities per depot. Related options:
partial_loading and
max_combinations .
- 'match_batch_solves': Matches
sample_points source and
destination pairs for the shortest path solves
in batch mode.
- 'match_loops': Matches closed
loops (Eulerian paths) originating and ending
at each graph node within min and max hops
(levels).
- 'match_charging_stations':
Matches an optimal path across a number of
ev-charging stations between source and target
locations.
- 'match_similarity': Matches the
intersection set(s) by computing the Jaccard
similarity score between node pairs.
- 'match_pickup_dropoff': Matches
the pickups and dropoffs by optimizing the
total trip costs
- 'match_clusters': Matches the
graph nodes with a cluster index using Louvain
clustering algorithm
- 'match_pattern': Matches a
pattern in the graph
The default value is 'markov_chain'. |
solution_table |
String
|
The name of the table used to store the
results, in [schema_name.]table_name format,
using standard name resolution rules and
meeting table naming criteria.
This table contains a track of geospatial points
for the matched portion of the graph, a
track ID, and a score value. Also outputs a
details table containing a trip ID (that
matches the track ID), the
latitude/longitude pair, the timestamp the
point was recorded at, and an edge ID
corresponding to the matched road segment.
Must not be an existing table of the same
name. |
options |
Object
|
Additional parameters
- 'gps_noise': GPS noise value (in
meters) to remove redundant sample points. Use -1
to disable noise reduction. The default value
accounts for 95% of point variation (+ or -5
meters). The default value is '5.0'.
- 'num_segments': Maximum number of
potentially matching road segments for each sample
point. For the
markov_chain solver,
the default is 3. The default value is '3'.
- 'search_radius': Maximum search radius
used when snapping sample points onto potentially
matching surrounding segments. The default value
corresponds to approximately 100 meters. The
default value is '0.001'.
- 'chain_width': For the
markov_chain solver only. Length of
the sample points lookahead window within the
Markov kernel; the larger the number, the more
accurate the solution. The default value is '9'.
- 'source': Optional WKT starting point
from
sample_points for the solver. The
default behavior for the endpoint is to use time to
determine the starting point. The default value is
'POINT NULL'.
- 'destination': Optional WKT ending
point from
sample_points for the
solver. The default behavior for the endpoint is to
use time to determine the destination point. The
default value is 'POINT NULL'.
- 'partial_loading': For the
match_supply_demand solver only. When
false (non-default), trucks do not off-load at the
demand (store) side if the remainder is less than
the store's need
Supported values:
- 'true': Partial off-loading at
multiple store (demand) locations
- 'false': No partial off-loading
allowed if supply is less than the store's demand.
The default value is 'true'.
- 'max_combinations': For the
match_supply_demand solver only. This
is the cutoff for the number of generated
combinations for sequencing the demand locations -
can increase this up to 2M. The default value is
'10000'.
- 'max_supply_combinations': For the
match_supply_demand solver only. This
is the cutoff for the number of generated
combinations for sequencing the supply locations
if/when 'permute_supplies' is true. The default
value is '10000'.
- 'left_turn_penalty': This will add an
additional weight over the edges labelled as 'left
turn' if the 'add_turn' option parameter of the
GPUdb#create_graph was invoked at
graph creation. The default value is '0.0'.
- 'right_turn_penalty': This will add an
additional weight over the edges labelled as 'right
turn' if the 'add_turn' option parameter of the
GPUdb#create_graph was invoked at
graph creation. The default value is '0.0'.
- 'intersection_penalty': This will add
an additional weight over the edges labelled as
'intersection' if the 'add_turn' option parameter
of the
GPUdb#create_graph was invoked
at graph creation. The default value is '0.0'.
- 'sharp_turn_penalty': This will add an
additional weight over the edges labelled as 'sharp
turn' or 'u-turn' if the 'add_turn' option
parameter of the
GPUdb#create_graph
was invoked at graph creation. The default value
is '0.0'.
- 'aggregated_output': For the
match_supply_demand solver only. When
it is true (default), each record in the output
table shows a particular truck's scheduled
cumulative round trip path (MULTILINESTRING) and
the corresponding aggregated cost. Otherwise, each
record shows a single scheduled truck route
(LINESTRING) towards a particular demand location
(store id) with its corresponding cost. The
default value is 'true'.
- 'output_tracks': For the
match_supply_demand solver only. When
it is true (non-default), the output will be in
tracks format for all the round trips of each truck
in which the timestamps are populated directly from
the edge weights starting from their originating
depots. The default value is 'false'.
- 'max_trip_cost': For the
match_supply_demand and
match_pickup_dropoff solvers only. If
this constraint is greater than zero (default) then
the trucks/rides will skip travelling from one
demand/pick location to another if the cost between
them is greater than this number (distance or
time). Zero (default) value means no check is
performed. The default value is '0.0'.
- 'filter_folding_paths': For the
markov_chain solver only. When true
(non-default), the paths per sequence combination
are checked for folding-over patterns, which can
significantly increase the execution time depending
on the chain width and the number of gps samples.
Supported values:
- 'true': Filter out the folded paths.
- 'false': Do not filter out the folded
paths
The default value is 'false'.
- 'unit_unloading_cost': For the
match_supply_demand solver only. The
unit cost per load amount to be delivered. If this
value is greater than zero (default) then the
additional cost of this unit load multiplied by the
total dropped load will be added over to the trip
cost to the demand location. The default value is
'0.0'.
- 'max_num_threads': For the
markov_chain solver only. If specified
(greater than zero), the maximum number of threads
will not be greater than the specified value. It
can be lower due to the memory and the number of cores
available. Default value of zero allows the
algorithm to set the maximal number of threads
within these constraints. The default value is
'0'.
- 'service_limit': For the
match_supply_demand solver only. If
specified (greater than zero), any supply actor's
total service cost (distance or time) will be
limited by the specified value including multiple
rounds (if set). The default value is '0.0'.
- 'enable_reuse': For the
match_supply_demand solver only. If
specified (true), all supply actors can be
scheduled for second rounds from their originating
depots.
Supported values:
- 'true': Allows reusing supply actors
(trucks, e.g.) for scheduling again.
- 'false': Supply actors are scheduled
only once from their depots.
The default value is 'false'.
- 'max_stops': For the
match_supply_demand solver only. If
specified (greater than zero), a supply actor
(truck) can at most have this many stops (demand
locations) in one round trip. Otherwise, it is
unlimited. If 'enable_truck_reuse' is on, this
condition will be applied separately at each round
trip use of the same truck. The default value is
'0'.
- 'service_radius': For the
match_supply_demand and
match_pickup_dropoff solvers only. If
specified (greater than zero), it filters the
demands/picks outside this radius centered around
the supply actor/ride's originating location
(distance or time). The default value is '0.0'.
- 'permute_supplies': For the
match_supply_demand solver only. If
specified (true), supply side actors are permuted
for the demand combinations during msdo
optimization - note that this option increases
optimization time significantly - use of
'max_combinations' option is recommended to prevent
prohibitively long runs
Supported values:
- 'true': Generates sequences over
supply side permutations if total supply is less
than twice the total demand
- 'false': Permutations are not
performed, rather a specific order of supplies
based on capacity is computed
The default value is 'true'.
- 'batch_tsm_mode': For the
match_supply_demand solver only. When
enabled, the number of visits on each demand
location by a single salesman at each trip is
limited to one; otherwise there is no bound.
Supported values:
- 'true': Sets only one visit per demand
location by a salesman (tsm mode)
- 'false': No preset limit (usual msdo
mode)
The default value is 'false'.
- 'round_trip': For the
match_supply_demand solver only. When
enabled, the supply will have to return to its
originating location.
Supported values:
- 'true': The optimization is done for
trips in round trip manner always returning to
originating locations
- 'false': Supplies do not have to come
back to their originating locations in their
routes. The routes are considered finished at the
final dropoff.
The default value is 'true'.
- 'num_cycles': For the
match_clusters solver only. Terminates
the cluster exchange iterations across
2-step-cycles (outer loop) when quality does not
improve during iterations. The default value is
'10'.
- 'num_loops_per_cycle': For the
match_clusters solver only. Terminates
the cluster exchanges within the first step
iterations of a cycle (inner loop) unless
convergence is reached. The default value is '10'.
- 'num_output_clusters': For the
match_clusters solver only. Limits
the output to the top 'num_output_clusters'
clusters based on density. Default value of zero
outputs all clusters. The default value is '0'.
- 'max_num_clusters': For the
match_clusters solver only. If set
(value greater than zero), it terminates when the
number of clusters goes below this number.
The default value is '0'.
- 'cluster_quality_metric': For the
match_clusters solver only. The
quality metric for Louvain modularity optimization
solver.
Supported values:
- 'girvan': Uses the Newman Girvan
quality metric for cluster solver
- 'spectral': Applies recursive spectral
bisection (RSB) partitioning solver
The default value is 'girvan'.
- 'restricted_type': For the
match_supply_demand solver only.
Optimization is performed by restricting routes
labeled by 'MSDO_ODDEVEN_RESTRICTED' only for this
supply actor (truck) type
Supported values:
- 'odd': Applies odd/even rule
restrictions to odd tagged vehicles.
- 'even': Applies odd/even rule
restrictions to even tagged vehicles.
- 'none': Does not apply odd/even rule
restrictions to any vehicles.
The default value is 'none'.
- 'server_id': Indicates which graph
server(s) to send the request to. Default is to
send to the server, amongst those containing the
corresponding graph, that has the most
computational bandwidth. The default value is ''.
- 'inverse_solve': For the
match_batch_solves solver only. Solves
source-destination pairs using inverse shortest
path solver.
Supported values:
- 'true': Solves using inverse shortest
path solver.
- 'false': Solves using direct shortest
path solver.
The default value is 'false'.
- 'min_loop_level': For the
match_loops solver only. Finds closed
loops deducible around each node that are at least
this many hops (levels) deep. The default value is
'0'.
- 'max_loop_level': For the
match_loops solver only. Finds closed
loops deducible around each node that are at most
this many hops (levels) deep. The default value is
'5'.
- 'search_limit': For the
match_loops solver only. Searches
within this limit of nodes per vertex to detect
loops. The value zero means there is no limit. The
default value is '10000'.
- 'output_batch_size': For the
match_loops solver only. Uses this
value as the batch size for the number of loops
flushed (inserted) into the output table. The
default value is '1000'.
- 'charging_capacity': For the
match_charging_stations solver only.
This is the maximum ev-charging capacity of a
vehicle (distance in meters or time in seconds
depending on the unit of the graph weights). The
default value is '300000.0'.
- 'charging_candidates': For the
match_charging_stations solver only.
The solver searches for this many stations closest
to each base charging location found by capacity.
The default value is '10'.
- 'charging_penalty': For the
match_charging_stations solver only.
This is the penalty for full charging. The default
value is '30000.0'.
- 'max_hops': For the
match_similarity solver only. Searches
within this maximum number of hops for source and
target node pairs to compute the Jaccard scores.
The default value is '3'.
- 'traversal_node_limit': For the
match_similarity solver only. Limits
the traversal depth once it reaches this many
nodes. The default value is '1000'.
- 'paired_similarity': For the
match_similarity solver only. If true,
computes the Jaccard score between each pair;
otherwise, computes the Jaccard score from the
intersection set between the source and target
nodes.
Supported values:
The default value is 'true'.
- 'force_undirected': For the
match_pattern solver only. Pattern
matching will be using both pattern and graph as
undirected if set to true.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
match_graph_request(request, callback) → {Promise}
Matches a directed route implied by a given set of
latitude/longitude points to an existing underlying road network graph using
a
given solution type.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, the
Graph REST
Tutorial,
and/or some
/match/graph
examples
before using this endpoint.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
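Example (illustrative only): a minimal sketch of a map-matching request. The graph name, sample table, column names, and SAMPLE_* identifiers are hypothetical, and the request field names are assumed to mirror the positional parameters of match_graph (graph_name, sample_points, solve_method, solution_table, options); consult the /match/graph documentation for the authoritative schema.

const db = new GPUdb("http://localhost:9191");

// Hypothetical request: match GPS sample points against an existing
// graph using the markov_chain solver, capping its thread usage via the
// markov_chain-only 'max_num_threads' option described above.
const request = {
    graph_name: "road_graph",
    sample_points: [
        "gps_samples.x AS SAMPLE_X",
        "gps_samples.y AS SAMPLE_Y",
        "gps_samples.ts AS SAMPLE_TIME"
    ],
    solve_method: "markov_chain",
    solution_table: "match_result",
    options: { max_num_threads: "4" }
};

// With no callback provided, a promise is returned.
db.match_graph_request(request)
    .then(response => console.log(response))
    .catch(err => console.error(err));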
merge_records(table_name, source_table_names, field_maps, options, callback) → {Promise}
Creates a new empty result table (specified by
table_name
),
and insert all records from source tables
(specified by
source_table_names
) based on the field mapping
information (specified by
field_maps
).
For merge records details and examples, see
Merge Records.
For limitations, see
Merge Records Limitations and Cautions.
The field map (specified by field_maps
) holds the
user-specified maps
of target table column names to source table columns. The array of
field_maps
must match one-to-one with the
source_table_names
,
i.e., there's a map present in field_maps
for each table listed
in
source_table_names
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
The name of the new result table for the records
to be merged into, in [schema_name.]table_name
format, using standard name resolution rules and
meeting table naming criteria. Must
NOT be an existing table. |
source_table_names |
Array.<String>
|
The list of names of source tables to
get the records from, each in
[schema_name.]table_name format, using
standard name resolution
rules. Must be existing table
names. |
field_maps |
Array.<Object>
|
Contains a list of source/target column
mappings, one mapping for each source table
listed in source_table_names
being merged into the target table specified
by table_name . Each mapping
contains the target column names (as keys)
that the data in the mapped source columns or
column expressions (as values) will
be merged into. All of the source columns
being merged into a given target column must
match in type, as that type will determine the
type of the new target column. |
options |
Object
|
Optional parameters.
- 'create_temp_table': If
true , a unique temporary table name
will be generated in the sys_temp schema and used
in place of table_name . If
persist is false , then
this is always allowed even if the caller does not
have permission to create tables. The generated
name is returned in
qualified_table_name .
Supported values:
The default value is 'false'.
- 'collection_name': [DEPRECATED--please
specify the containing schema for the merged table
as part of
table_name and use
GPUdb#create_schema to create the
schema if non-existent] Name of a schema for the
newly created merged table specified by
table_name .
- 'is_replicated': Indicates the distribution scheme for the data
of the merged table specified in
table_name . If true, the table will
be replicated. If false, the table
will be randomly sharded.
Supported values:
The default value is 'false'.
- 'ttl': Sets the TTL
of the merged table specified in
table_name .
- 'persist': If
true , then
the table specified in table_name will
be persisted and will not expire unless a
ttl is specified. If
false , then the table will be an
in-memory table and will expire unless a
ttl is specified otherwise.
Supported values:
The default value is 'true'.
- 'chunk_size': Indicates the number of
records per chunk to be used for the merged table
specified in
table_name .
- 'view_id': The view this result table
is part of. The default value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
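Example (illustrative only): a minimal sketch of merging two hypothetical source tables into a new result table; all table and column names are invented. Note the one-to-one pairing between field_maps entries and source_table_names.

const db = new GPUdb("http://localhost:9191");

db.merge_records(
    "example.all_customers",                              // must NOT already exist
    ["example.customers_east", "example.customers_west"],
    [
        // One map per source table: target column (key) <- source column
        // or expression (value); mapped source types must agree.
        { id: "cust_id",     name: "full_name" },
        { id: "customer_id", name: "CONCAT(first_name, last_name)" }
    ],
    { persist: "true" }
).then(response => console.log(response))
 .catch(err => console.error(err));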
merge_records_request(request, callback) → {Promise}
Creates a new empty result table (specified by
table_name
),
and insert all records from source tables
(specified by
source_table_names
) based on the field mapping
information (specified by
field_maps
).
For merge records details and examples, see
Merge Records.
For limitations, see
Merge Records Limitations and Cautions.
The field map (specified by field_maps
) holds the
user-specified maps
of target table column names to source table columns. The array of
field_maps
must match one-to-one with the
source_table_names
,
i.e., there's a map present in field_maps
for each table listed
in
source_table_names
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
modify_graph(graph_name, nodes, edges, weights, restrictions, options, callback) → {Promise}
Update an existing graph network using given nodes, edges, weights,
restrictions, and options.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, and
Graph REST
Tutorial
before using this endpoint.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph resource to modify. |
nodes |
Array.<String>
|
Nodes with which to update existing
nodes in graph specified by
graph_name . Review Nodes for more information. Nodes
must be specified using identifiers; identifiers are
grouped as combinations. Identifiers can be
used with existing column names, e.g.,
'table.column AS NODE_ID', expressions, e.g.,
'ST_MAKEPOINT(column1, column2) AS NODE_WKTPOINT',
or raw values, e.g., '{9, 10, 11} AS NODE_ID'. If
using raw values in an identifier combination, the
number of values specified must match across the
combination. Identifier combination(s) do not have
to match the method used to create the graph, e.g.,
if column names were specified to create the graph,
expressions or raw values could also be used to
modify the graph. |
edges |
Array.<String>
|
Edges with which to update existing
edges in graph specified by
graph_name . Review Edges for more information. Edges
must be specified using identifiers; identifiers are
grouped as combinations. Identifiers can be
used with existing column names, e.g.,
'table.column AS EDGE_ID', expressions, e.g.,
'SUBSTR(column, 1, 6) AS EDGE_NODE1_NAME', or raw
values, e.g., "{'family', 'coworker'} AS
EDGE_LABEL". If using raw values in an identifier
combination, the number of values specified must
match across the combination. Identifier
combination(s) do not have to match the method used
to create the graph, e.g., if column names were
specified to create the graph, expressions or raw
values could also be used to modify the graph. |
weights |
Array.<String>
|
Weights with which to update existing
weights in graph specified by
graph_name . Review Weights for more information.
Weights must be specified using identifiers; identifiers are
grouped as combinations. Identifiers can
be used with existing column names, e.g.,
'table.column AS WEIGHTS_EDGE_ID', expressions,
e.g., 'ST_LENGTH(wkt) AS WEIGHTS_VALUESPECIFIED',
or raw values, e.g., '{4, 15} AS
WEIGHTS_VALUESPECIFIED'. If using raw values in
an identifier combination, the number of values
specified must match across the combination.
Identifier combination(s) do not have to match
the method used to create the graph, e.g., if
column names were specified to create the graph,
expressions or raw values could also be used to
modify the graph. |
restrictions |
Array.<String>
|
Restrictions with which to update existing
restrictions in graph specified
by graph_name . Review Restrictions for more
information. Restrictions must be specified
using identifiers; identifiers
are grouped as combinations. Identifiers
can be used with existing column names,
e.g., 'table.column AS
RESTRICTIONS_EDGE_ID', expressions, e.g.,
'column/2 AS RESTRICTIONS_VALUECOMPARED', or
raw values, e.g., '{0, 0, 0, 1} AS
RESTRICTIONS_ONOFFCOMPARED'. If using raw
values in an identifier combination, the
number of values specified must match across
the combination. Identifier combination(s)
do not have to match the method used to
create the graph, e.g., if column names were
specified to create the graph, expressions
or raw values could also be used to modify
the graph. |
options |
Object
|
Optional parameters.
- 'restriction_threshold_value':
Value-based restriction comparison. Any node or
edge with a RESTRICTIONS_VALUECOMPARED value
greater than the
restriction_threshold_value will not
be included in the graph.
- 'export_create_results': If set to
true , returns the graph topology in
the response as arrays.
Supported values:
The default value is 'false'.
- 'enable_graph_draw': If set to
true , adds a 'EDGE_WKTLINE' column
identifier to the specified
graph_table so the graph can be viewed
via WMS; for social and non-geospatial graphs, the
'EDGE_WKTLINE' column identifier will be populated
with spatial coordinates derived from a flattening
layout algorithm so the graph can still be viewed.
Supported values:
The default value is 'false'.
- 'save_persist': If set to
true , the graph will be saved in the
persist directory (see the config
reference for more information). If set to
false , the graph will be removed when
the graph server is shutdown.
Supported values:
The default value is 'false'.
- 'add_table_monitor': Adds a table
monitor to every table used in the creation of the
graph; this table monitor will trigger the graph to
update dynamically upon inserts to the source
table(s). Note that upon database restart, if
save_persist is also set to
true , the graph will be fully
reconstructed and the table monitors will be
reattached. For more details on table monitors, see
GPUdb#create_table_monitor .
Supported values:
The default value is 'false'.
- 'graph_table': If specified, the
created graph is also created as a table with the
given name, in [schema_name.]table_name format,
using standard name resolution rules and meeting
table naming criteria. This
table will have the following identifier columns:
'EDGE_ID', 'EDGE_NODE1_ID', 'EDGE_NODE2_ID'. If
left blank, no table is created. The default value
is ''.
- 'remove_label_only': When RESTRICTIONS
on labeled entities are requested, if set to true,
only the label associated with the entity will be
deleted, not the entity itself. Otherwise
(default), both the label and the entity will be
deleted.
Supported values:
The default value is 'false'.
- 'add_turns': Adds dummy 'pillowed'
edges around intersection nodes where there are
more than three edges so that additional weight
penalties can be imposed by the solve endpoints.
(increases the total number of edges).
Supported values:
The default value is 'false'.
- 'turn_angle': Value in degrees
modifies the thresholds for attributing right,
left, sharp turns, and intersections. It is the
vertical deviation angle from the incoming edge to
the intersection node. The larger the value, the
larger the threshold for sharp turns and
intersections; the smaller the value, the larger
the threshold for right and left turns; 0 <
turn_angle < 90. The default value is '60'.
- 'use_rtree': Use a range tree
structure to accelerate and improve the accuracy of
snapping, especially to edges.
Supported values:
The default value is 'true'.
- 'label_delimiter': If provided, the
label string will be split according to this
delimiter, and each sub-string will be applied as a
separate label onto the specified edge. The
default value is ''.
- 'allow_multiple_edges': Multigraph
choice; if set to true, multiple edges with the
same node pairs are allowed. Otherwise, new edges
duplicating existing node pairs will not be
inserted.
Supported values:
The default value is 'true'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
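Example (illustrative only): a sketch that appends two edges to a hypothetical existing graph using raw-value identifier combinations. The graph name, node IDs, and weight values are invented, and whether additional weight identifiers (e.g., WEIGHTS_EDGE_ID) are required depends on how the graph was created.

const db = new GPUdb("http://localhost:9191");

db.modify_graph(
    "road_graph",
    [],                                          // nodes: leave unchanged
    [
        // Raw-value combination: value counts must match across identifiers
        "{100, 101} AS EDGE_NODE1_ID",
        "{101, 102} AS EDGE_NODE2_ID"
    ],
    ["{2.5, 3.1} AS WEIGHTS_VALUESPECIFIED"],    // one weight per new edge
    [],                                          // restrictions: leave unchanged
    { save_persist: "true" }                     // keep the graph across restarts
).then(response => console.log(response))
 .catch(err => console.error(err));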
modify_graph_request(request, callback) → {Promise}
Update an existing graph network using given nodes, edges, weights,
restrictions, and options.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, and
Graph REST
Tutorial
before using this endpoint.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
query_graph(graph_name, queries, restrictions, adjacency_table, rings, options, callback) → {Promise}
Employs a topological query on a network graph generated a priori by
GPUdb#create_graph
and returns a list of adjacent edge(s) or
node(s),
also known as an adjacency list, depending on what's been provided to the
endpoint; providing edges will return nodes and providing nodes will return
edges.
To determine the node(s) or edge(s) adjacent to a value from a given column,
provide a list of values to queries
. This field can be
populated with
column values from any table as long as the type is supported by the given
identifier. See
Query Identifiers
for more information.
To return the adjacency list in the response, leave
adjacency_table
empty.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, the
Graph REST
Tutorial,
and/or some
/match/graph
examples
before using this endpoint.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph resource to query. |
queries |
Array.<String>
|
Nodes or edges to be queried specified using query identifiers. Identifiers
can be used with existing column names, e.g.,
'table.column AS QUERY_NODE_ID', raw values,
e.g., '{0, 2} AS QUERY_NODE_ID', or expressions,
e.g., 'ST_MAKEPOINT(table.x, table.y) AS
QUERY_NODE_WKTPOINT'. Multiple values can be
provided as long as the same identifier is used
for all values. If using raw values in an
identifier combination, the number of values
specified must match across the combination. |
restrictions |
Array.<String>
|
Additional restrictions to apply to the
nodes/edges of an existing graph.
Restrictions must be specified using identifiers; identifiers
are grouped as combinations. Identifiers
can be used with existing column names,
e.g., 'table.column AS
RESTRICTIONS_EDGE_ID', expressions, e.g.,
'column/2 AS RESTRICTIONS_VALUECOMPARED', or
raw values, e.g., '{0, 0, 0, 1} AS
RESTRICTIONS_ONOFFCOMPARED'. If using raw
values in an identifier combination, the
number of values specified must match across
the combination. |
adjacency_table |
String
|
Name of the table to store the resulting
adjacencies, in [schema_name.]table_name
format, using standard name resolution rules and
meeting table naming criteria.
If left blank, the query results are
instead returned in the response. If the
'QUERY_TARGET_NODE_LABEL' query identifier is used
in queries , then two
additional columns will be available:
'PATH_ID' and 'RING_ID'. See Using Labels for more
information. |
rings |
Number
|
Sets the number of rings around the node to query for
adjacency, with '1' being the edges directly attached
to the queried node. Also known as number of hops.
For example, if it is set to '2', the edge(s)
directly attached to the queried node(s) will be
returned; in addition, the edge(s) attached to the
node(s) attached to the initial ring of edge(s)
surrounding the queried node(s) will be returned. If
the value is set to '0', any nodes that meet the
criteria in queries and
restrictions will be returned. This
parameter is only applicable when querying nodes. |
options |
Object
|
Additional parameters
- 'force_undirected': If set to
true , all inbound edges and outbound
edges relative to the node will be returned. If set
to false , only outbound edges relative
to the node will be returned. This parameter is
only applicable if the queried graph
graph_name is directed and when
querying nodes. Consult Directed Graphs for more details.
Supported values:
The default value is 'false'.
- 'limit': When specified (>0), limits
the number of query results. The size of the nodes
table will be limited by the
limit
value. The default value is '0'.
- 'output_wkt_path': If true then
concatenated wkt line segments will be added as the
WKT column of the adjacency table.
Supported values:
The default value is 'false'.
- 'and_labels': If set to
true , the result of the query has
entities that satisfy all of the target labels,
instead of any.
Supported values:
The default value is 'false'.
- 'server_id': Indicates which graph
server(s) to send the request to. Default is to
send to the server, amongst those containing the
corresponding graph, that has the most
computational bandwidth.
- 'output_charn_length': When specified
(>0 and <=256), limits the character length on
the output tables for string-based nodes. The
default length is 64. The default value is '64'.
- 'find_common_labels': If set to true,
for many-to-many queries or multi-level traversals,
it lists the common labels between the source and
target nodes and the edge labels in each path.
Otherwise (zero rings), it will list all labels of
the node(s) queried.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
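Example (illustrative only): a two-ring adjacency query around two raw node IDs of a hypothetical graph. Leaving adjacency_table blank returns the adjacency list in the response rather than writing it to a table.

const db = new GPUdb("http://localhost:9191");

db.query_graph(
    "road_graph",
    ["{0, 2} AS QUERY_NODE_ID"],   // raw-value query identifier
    [],                            // no additional restrictions
    "",                            // blank: return adjacencies in the response
    2,                             // rings: edges up to two hops out
    { and_labels: "false" }
).then(response => console.log(response))
 .catch(err => console.error(err));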
query_graph_request(request, callback) → {Promise}
Employs a topological query on a network graph generated a priori by
GPUdb#create_graph
and returns a list of adjacent edge(s) or
node(s),
also known as an adjacency list, depending on what's been provided to the
endpoint; providing edges will return nodes and providing nodes will return
edges.
To determine the node(s) or edge(s) adjacent to a value from a given column,
provide a list of values to queries
. This field can be
populated with
column values from any table as long as the type is supported by the given
identifier. See
Query Identifiers
for more information.
To return the adjacency list in the response, leave
adjacency_table
empty.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, the
Graph REST
Tutorial,
and/or some
/match/graph
examples
before using this endpoint.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
remove_http_header(header)
Removes the given HTTP header from the map of additional HTTP headers
to send to GPUdb with each request. The user is not allowed to remove
the following protected headers:
- 'Accept'
- 'Authorization'
- 'Content-type'
- 'X-Kinetica-Group'
Parameters:
Name |
Type |
Description |
header |
String
|
The header to remove. |
- Source:
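Example (illustrative only): removing a custom header that was previously added; the header name is hypothetical, and the protected headers listed above cannot be removed.

const db = new GPUdb("http://localhost:9191");

// Stop sending a previously added custom header with each request.
db.remove_http_header("X-Custom-Trace-Id");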
repartition_graph(graph_name, options, callback) → {Promise}
Rebalances an existing partitioned graph.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph resource to rebalance. |
options |
Object
|
Optional parameters.
- 'new_graph_name': If a non-empty value
is specified, the original graph will be kept
(non-default behaviour) and a new balanced graph
will be created under this given name. When the
value is empty (default), the generated 'balanced'
graph will replace the original 'unbalanced'
graph under the same graph name. The default value
is ''.
- 'source_node': The distributed
shortest path solve is run from this source node to
all the nodes in the graph to create balanced
partitions using the iso-distance levels of the
solution. The source node is selected automatically
by the rebalance algorithm (the default case, when
the value is an empty string). Otherwise, the
user-specified node is used as the source. The
default value is ''.
- 'sql_request_avro_json': The default
value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
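Example (illustrative only): rebalancing a hypothetical graph into a new graph while keeping the original; with an empty 'new_graph_name' (the default), the balanced graph would replace the original instead.

const db = new GPUdb("http://localhost:9191");

db.repartition_graph(
    "road_graph",
    { new_graph_name: "road_graph_balanced" }  // keep the original graph
).then(response => console.log(response))
 .catch(err => console.error(err));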
repartition_graph_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission(principal, object, object_type, permission, options, callback) → {Promise}
Revokes the specified permission on the specified object from a user or role.
Parameters:
Name |
Type |
Description |
principal |
String
|
Name of the user or role for which the permission
is being revoked. Must be an existing user or
role. |
object |
String
|
Name of the object from which the permission is
being revoked. It is recommended to use a
fully-qualified name when possible. |
object_type |
String
|
The type of the object on which the permission is being revoked
Supported values:
- 'context': Context
- 'credential': Credential
- 'datasink': Data Sink
- 'datasource': Data Source
- 'directory': KIFS File Directory
- 'graph': A Graph object
- 'proc': UDF Procedure
- 'schema': Schema
- 'sql_proc': SQL Procedure
- 'system': System-level access
- 'table': Database Table
- 'table_monitor': Table monitor
|
permission |
String
|
Permission being revoked.
Supported values:
- 'admin': Full read/write and
administrative access on the object.
- 'connect': Connect access on the
given data source or data sink.
- 'delete': Delete rows from tables.
- 'execute': Ability to Execute the
Procedure object.
- 'insert': Insert access to tables.
- 'read': Ability to read, list and
use the object.
- 'update': Update access to the
table.
- 'user_admin': Access to administer
users and roles that do not have system_admin
permission.
- 'write': Access to write, change
and delete objects.
|
options |
Object
|
Optional parameters.
- 'columns': Revoke table security from
these columns, comma-separated. The default value
is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
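Example (illustrative only): revoking table-level insert permission from a hypothetical user; the credentials and names are invented, and a fully-qualified object name is used as recommended.

const db = new GPUdb("http://localhost:9191",
                     { username: "admin", password: "secret" });

db.revoke_permission(
    "analyst_jane",    // principal: existing user or role
    "example.sales",   // object: fully-qualified table name
    "table",           // object_type
    "insert",          // permission
    {}
).then(response => console.log(response))
 .catch(err => console.error(err));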
revoke_permission_credential(name, permission, credential_name, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role from which the permission
will be revoked. Must be an existing user or role. |
permission |
String
|
Permission to revoke from the user or role.
Supported values:
- 'credential_admin': Full read/write
and administrative access on the credential.
- 'credential_read': Ability to read
and use the credential.
|
credential_name |
String
|
Name of the credential on which the
permission will be revoked. Must be an
existing credential, or an empty string to
revoke access on all credentials. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission_credential_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission_datasource(name, permission, datasource_name, options, callback) → {Promise}
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role from which the permission
will be revoked. Must be an existing user or role. |
permission |
String
|
Permission to revoke from the user or role
Supported values:
- 'admin': Admin access on the given
data source
- 'connect': Connect access on the
given data source
|
datasource_name |
String
|
Name of the data source on which the
permission will be revoked. Must be an
existing data source, or an empty string to
revoke permission from all data sources. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission_datasource_request(request, callback) → {Promise}
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission_directory(name, permission, directory_name, options, callback) → {Promise}
Revokes a
KiFS
directory-level permission from a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role from which the permission
will be revoked. Must be an existing user or role. |
permission |
String
|
Permission to revoke from the user or role.
Supported values:
- 'directory_read': For files in the
directory, access to list files, download files,
or use files in server side functions
- 'directory_write': Access to upload
files to, or delete files from, the directory. A
user or role with write access automatically has
read access
|
directory_name |
String
|
Name of the KiFS directory on which the
permission will be revoked |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission_directory_request(request, callback) → {Promise}
Revokes a
KiFS
directory-level permission from a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission_proc(name, permission, proc_name, options, callback) → {Promise}
Revokes a proc-level permission from a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role from which the permission
will be revoked. Must be an existing user or role. |
permission |
String
|
Permission to revoke from the user or role.
Supported values:
- 'proc_admin': Admin access to the
proc.
- 'proc_execute': Execute access to
the proc.
|
proc_name |
String
|
Name of the proc on which the permission will
be revoked. Must be an existing proc, or an
empty string to revoke the permission on all
procs. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission_proc_request(request, callback) → {Promise}
Revokes a proc-level permission from a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission_request(request, callback) → {Promise}
Revokes the specified permission on the specified object from a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission_system(name, permission, options, callback) → {Promise}
Revokes a system-level permission from a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role from which the permission
will be revoked. Must be an existing user or role. |
permission |
String
|
Permission to revoke from the user or role.
Supported values:
- 'system_admin': Full access to all
data and system functions.
- 'system_user_admin': Access to
administer users and roles that do not have
system_admin permission.
- 'system_write': Read and write
access to all tables.
- 'system_read': Read-only access to
all tables.
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission_system_request(request, callback) → {Promise}
Revokes a system-level permission from a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_permission_table(name, permission, table_name, options, callback) → {Promise}
Revokes a table-level permission from a user or role.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the user or role from which the permission
will be revoked. Must be an existing user or role. |
permission |
String
|
Permission to revoke from the user or role.
Supported values:
- 'table_admin': Full read/write and
administrative access to the table.
- 'table_insert': Insert access to
the table.
- 'table_update': Update access to
the table.
- 'table_delete': Delete access to
the table.
- 'table_read': Read access to the
table.
|
table_name |
String
|
Name of the table on which the permission will be
revoked, in [schema_name.]table_name format,
using standard name resolution rules. Must
be an existing table, view or schema. |
options |
Object
|
Optional parameters.
- 'columns': Revoke security from these
columns, comma-separated. The default value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
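Example (illustrative only): revoking read access on two specific columns of a hypothetical table via the 'columns' option; all names are invented.

const db = new GPUdb("http://localhost:9191");

db.revoke_permission_table(
    "analyst_jane",
    "table_read",
    "example.sales",
    { columns: "ssn,salary" }  // revoke column-level security only
).then(response => console.log(response))
 .catch(err => console.error(err));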
revoke_permission_table_request(request, callback) → {Promise}
Revokes a table-level permission from a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
revoke_role(role, member, options, callback) → {Promise}
Revokes membership in a role from a user or role.
Parameters:
Name |
Type |
Description |
role |
String
|
Name of the role in which membership will be revoked.
Must be an existing role. |
member |
String
|
Name of the user or role that will be revoked
membership in role . Must be an existing
user or role. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
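Example (illustrative only): removing a hypothetical user's membership in a role.

const db = new GPUdb("http://localhost:9191");

db.revoke_role("sales_readers", "analyst_jane", {})
    .then(response => console.log(response))
    .catch(err => console.error(err));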
revoke_role_request(request, callback) → {Promise}
Revokes membership in a role from a user or role.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_credential(credential_name, options, callback) → {Promise}
Shows information about a specified
credential or all credentials.
Parameters:
Name |
Type |
Description |
credential_name |
String
|
Name of the credential on which to retrieve
information. The name must refer to a
currently existing credential. If '*' is
specified, information about all
credentials will be returned. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_credential_request(request, callback) → {Promise}
Shows information about a specified
credential or all credentials.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_datasink(name, options, callback) → {Promise}
Shows information about a specified
data sink or all data sinks.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the data sink for which to retrieve
information. The name must refer to a currently
existing data sink. If '*' is specified, information
about all data sinks will be returned. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_datasink_request(request, callback) → {Promise}
Shows information about a specified
data sink or all data sinks.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_datasource(name, options, callback) → {Promise}
Shows information about a specified
data source or all
data sources.
Parameters:
Name |
Type |
Description |
name |
String
|
Name of the data source for which to retrieve
information. The name must refer to a currently
existing data source. If '*' is specified, information
about all data sources will be returned. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_datasource_request(request, callback) → {Promise}
Shows information about a specified
data source or all
data sources.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_directories(directory_name, options, callback) → {Promise}
Shows information about directories in
KiFS. Can be used to show a single directory, or all
directories.
Parameters:
Name |
Type |
Description |
directory_name |
String
|
The KiFS directory name to show. If empty,
shows all directories. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_directories_request(request, callback) → {Promise}
Shows information about directories in
KiFS. Can be used to show a single directory, or all
directories.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_environment(environment_name, options, callback) → {Promise}
Shows information about a specified
user-defined function (UDF) environment or all
environments.
Returns detailed information about existing environments.
Parameters:
Name |
Type |
Description |
environment_name |
String
|
Name of the environment on which to
retrieve information. The name must refer
to a currently existing environment. If
'*' or an empty value is specified,
information about all environments will be
returned. |
options |
Object
|
Optional parameters.
- 'no_error_if_not_exists': If
true and if the environment specified
in environment_name does not exist, no
error is returned. If false and if the
environment specified in
environment_name does not exist, then
an error is returned.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_environment_request(request, callback) → {Promise}
Shows information about a specified
user-defined function (UDF) environment or all
environments.
Returns detailed information about existing environments.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_files(paths, options, callback) → {Promise}
Shows information about files in
KiFS. Can be used for individual files, or to show all
files in a given directory.
Parameters:
Name |
Type |
Description |
paths |
Array.<String>
|
File paths to show. Each path can be a KiFS
directory name, or a full path to a KiFS file. File
paths may contain wildcard characters after the
KiFS directory delimiter.
Accepted wildcard characters are asterisk (*) to
represent any string of zero or more characters,
and question mark (?) to indicate a single
character. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
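Example (illustrative only): listing all CSV files directly under a hypothetical KiFS directory using the '*' wildcard described above.

const db = new GPUdb("http://localhost:9191");

db.show_files(["data/*.csv"], {})
    .then(response => console.log(response))
    .catch(err => console.error(err));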
show_files_request(request, callback) → {Promise}
Shows information about files in
KiFS. Can be used for individual files, or to show all
files in a given directory.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_graph(graph_name, options, callback) → {Promise}
Shows information and characteristics of graphs that exist on the graph
server.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph on which to retrieve
information. If left as the default value,
information about all graphs is returned. |
options |
Object
|
Optional parameters.
- 'show_original_request': If set to
true , the request that was originally
used to create the graph is also returned as JSON.
Supported values:
The default value is 'true'.
- 'server_id': Indicates which graph
server(s) to send the request to. The default is to
get information about all the servers.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
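Example (illustrative only): listing all graphs on all graph servers, including the original creation request for each (the default behavior).

const db = new GPUdb("http://localhost:9191");

db.show_graph("", { show_original_request: "true" })
    .then(response => console.log(response))
    .catch(err => console.error(err));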
show_graph_request(request, callback) → {Promise}
Shows information and characteristics of graphs that exist on the graph
server.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_proc(proc_name, options, callback) → {Promise}
Shows information about a proc.
Parameters:
Name |
Type |
Description |
proc_name |
String
|
Name of the proc to show information about. If
specified, must be the name of a currently
existing proc. If not specified, information
about all procs will be returned. |
options |
Object
|
Optional parameters.
- 'include_files': If set to
true , the files that make up the proc
will be returned. If set to false , the
files will not be returned.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_proc_request(request, callback) → {Promise}
Shows information about a proc.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_proc_status(run_id, options, callback) → {Promise}
Shows the statuses of running or completed proc instances. Results are
grouped by run ID (as returned from
GPUdb#execute_proc
) and
data segment ID (each invocation of the proc command on a data segment is
assigned a data segment ID).
Parameters:
Name |
Type |
Description |
run_id |
String
|
The run ID of a specific proc instance for which the
status will be returned. If a proc with a matching
run ID is not found, the response will be empty. If
not specified, the statuses of all executed proc
instances will be returned. |
options |
Object
|
Optional parameters.
- 'clear_complete': If set to
true , if a proc instance has completed
(either successfully or unsuccessfully) then its
status will be cleared and no longer returned in
subsequent calls.
Supported values:
The default value is 'false'.
- 'run_tag': If
run_id is
specified, return the status for a proc instance
that has a matching run ID and a matching run tag
that was provided to
GPUdb#execute_proc . If
run_id is not specified, return
statuses for all proc instances where a matching
run tag was provided to
GPUdb#execute_proc . The default
value is ''.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
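Example (illustrative only): fetching the status of a single proc run (the run ID is hypothetical) and clearing it once read, so completed statuses are not returned by subsequent calls.

const db = new GPUdb("http://localhost:9191");

db.show_proc_status("12345", { clear_complete: "true" })
    .then(response => console.log(response))
    .catch(err => console.error(err));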
show_proc_status_request(request, callback) → {Promise}
Shows the statuses of running or completed proc instances. Results are
grouped by run ID (as returned from
GPUdb#execute_proc
) and
data segment ID (each invocation of the proc command on a data segment is
assigned a data segment ID).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_resource_groups(names, options, callback) → {Promise}
Requests resource group properties.
Returns detailed information about the requested resource groups.
Parameters:
Name |
Type |
Description |
names |
Array.<String>
|
List of names of groups to be shown. A single entry
with an empty string returns all groups. |
options |
Object
|
Optional parameters.
- 'show_default_values': If
true include values of fields that are
based on the default resource group.
Supported values:
The default value is 'true'.
- 'show_default_group': If
true include the default and system
resource groups in the response. This value
defaults to false if an explicit list of group
names is provided, and true otherwise.
Supported values:
The default value is 'true'.
- 'show_tier_usage': If
true include the resource group usage
on the worker ranks in the response.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
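Example (illustrative only): showing all resource groups, including per-tier usage on the worker ranks; a single empty string in names returns all groups, as noted above.

const db = new GPUdb("http://localhost:9191");

db.show_resource_groups([""], { show_tier_usage: "true" })
    .then(response => console.log(response))
    .catch(err => console.error(err));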
show_resource_groups_request(request, callback) → {Promise}
Requests resource group properties.
Returns detailed information about the requested resource groups.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_resource_objects(options, callback) → {Promise}
Returns information about the internal sub-components (tiered objects)
which use resources of the system. The request can either return results
from
actively used objects (default) or it can be used to query the status of the
objects of a given list of tables.
Returns detailed information about the requested resource objects.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters.
- 'tiers': Comma-separated list of tiers
to query, leave blank for all tiers.
- 'expression': An expression to filter
the returned objects. Expression is
limited to the following operators:
=,!=,<,<=,>,>=,+,-,*,AND,OR,LIKE. For details see
Expressions. To use a more
complex expression, query the
ki_catalog.ki_tiered_objects table directly.
- 'order_by': Single column to be sorted
by as well as the sort direction, e.g., 'size asc'.
Supported values:
- 'size'
- 'id'
- 'priority'
- 'tier'
- 'evictable'
- 'owner_resource_group'
- 'limit': An integer indicating the
maximum number of results to be
returned, per rank, or (-1) to indicate that the
maximum number of results allowed by the server
should be returned. The number of records returned
will never exceed the server's own limit,
defined by the max_get_records_size parameter in
the server
configuration. The default value is '100'.
- 'table_names': Comma-separated list of
tables to restrict the results to. Use '*' to show
all tables.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
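Example (illustrative only): listing the ten largest tiered objects in the RAM tier of a hypothetical deployment; the tier name is an assumption, and leaving 'tiers' blank would query all tiers.

const db = new GPUdb("http://localhost:9191");

db.show_resource_objects({
    tiers: "RAM",       // assumed tier name; blank means all tiers
    order_by: "size",   // sort column; a direction can be appended, e.g., 'size asc'
    limit: "10"
}).then(response => console.log(response))
  .catch(err => console.error(err));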
show_resource_objects_request(request, callback) → {Promise}
Returns information about the internal sub-components (tiered objects)
which use resources of the system. The request can either return results
from
actively used objects (default) or it can be used to query the status of the
objects of a given list of tables.
Returns detailed information about the requested resource objects.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_resource_statistics(options, callback) → {Promise}
Requests various statistics for storage/memory tiers and resource groups.
Returns statistics on a per-rank basis.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_resource_statistics_request(request, callback) → {Promise}
Requests various statistics for storage/memory tiers and resource groups.
Returns statistics on a per-rank basis.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_schema(schema_name, options, callback) → {Promise}
Retrieves information about a
schema (or all schemas), as specified in
schema_name
.
Parameters:
Name |
Type |
Description |
schema_name |
String
|
Name of the schema for which to retrieve the
information. If blank, then info for all
schemas is returned. |
options |
Object
|
Optional parameters.
- 'no_error_if_not_exists': If
false will return an error if the
provided schema_name does not exist.
If true then it will return an empty
result if the provided schema_name
does not exist.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_schema_request(request, callback) → {Promise}
Retrieves information about a
schema (or all schemas), as specified in
schema_name
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_security(names, options, callback) → {Promise}
Shows security information relating to users and/or roles. If the caller is
not a system administrator, only information relating to the caller and
their roles is returned.
Parameters:
Name |
Type |
Description |
names |
Array.<String>
|
A list of names of users and/or roles about which
security information is requested. If none are
provided, information about all users and roles
will be returned. |
options |
Object
|
Optional parameters.
- 'show_current_user': If
true , returns only security
information for the current user.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_security_request(request, callback) → {Promise}
Shows security information relating to users and/or roles. If the caller is
not a system administrator, only information relating to the caller and
their roles is returned.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_sql_proc(procedure_name, options, callback) → {Promise}
Shows information about SQL procedures, including the full definition of
each requested procedure.
Parameters:
Name |
Type |
Description |
procedure_name |
String
|
Name of the procedure for which to retrieve
the information. If blank, then information
about all procedures is returned. |
options |
Object
|
Optional parameters.
- 'no_error_if_not_exists': If
true , no error will be returned if the
requested procedure does not exist. If
false , an error will be returned if
the requested procedure does not exist.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
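A short sketch, assuming a GPUdb instance gpudb; a blank procedure name returns information about every SQL procedure, so no procedure names are assumed:

gpudb.show_sql_proc("", {},
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);
    });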
show_sql_proc_request(request, callback) → {Promise}
Shows information about SQL procedures, including the full definition of
each requested procedure.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_statistics(table_names, options, callback) → {Promise}
Retrieves the collected column statistics for the specified table(s).
Parameters:
Name |
Type |
Description |
table_names |
Array.<String>
|
Names of tables whose metadata will be
fetched, each in [schema_name.]table_name
format, using standard name resolution rules. All
provided tables must exist, or an error is
returned. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
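A promise-based sketch, assuming a GPUdb instance gpudb and a hypothetical table example.my_table with previously collected column statistics:

// Omitting the callback returns a promise.
gpudb.show_statistics(["example.my_table"], {})
    .then(function(response) { console.log(response); })
    .catch(function(error) { console.error(error); });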
show_statistics_request(request, callback) → {Promise}
Retrieves the collected column statistics for the specified table(s).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_system_properties(options, callback) → {Promise}
Returns server configuration and version related information to the caller.
The admin tool uses it to present server related information to the user.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters.
- 'properties': A list of comma
separated names of properties requested. If not
specified, all properties will be returned.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
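A minimal sketch, assuming a GPUdb instance gpudb; with no 'properties' option the server returns all properties:

gpudb.show_system_properties({}, function(err, response) {
    if (err) { console.error(err); return; }
    console.log(response);  // full server property map
});

To narrow the result, pass a comma-separated list in the 'properties' option, e.g. { "properties": "some.property.name" } (the property name here is hypothetical).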
show_system_properties_request(request, callback) → {Promise}
Returns server configuration and version related information to the caller.
The admin tool uses it to present server related information to the user.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_system_status(options, callback) → {Promise}
Provides server configuration and health related status to the caller. The
admin tool uses it to present server related information to the user.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters, currently unused. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_system_status_request(request, callback) → {Promise}
Provides server configuration and health related status to the caller. The
admin tool uses it to present server related information to the user.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_system_timing(options, callback) → {Promise}
Returns the last 100 database requests along with the request timing and
internal job id. The admin tool uses it to present request timing
information to the user.
Parameters:
Name |
Type |
Description |
options |
Object
|
Optional parameters, currently unused. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_system_timing_request(request, callback) → {Promise}
Returns the last 100 database requests along with the request timing and
internal job id. The admin tool uses it to present request timing
information to the user.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_table(table_name, options, callback) → {Promise}
Retrieves detailed information about a table, view, or schema,
specified in
table_name
. If the supplied
table_name
is a
schema the call can return information about either the schema itself or the
tables and views it contains. If
table_name
is empty,
information about
all schemas will be returned.
If the option get_sizes
is set to
true
, then the number of records
in each table is returned (in sizes
and
full_sizes
), along with the total number of objects across all
requested tables (in total_size
and
total_full_size
).
For a schema, setting the show_children
option to
false
returns only information
about the schema itself; setting show_children
to
true
returns a list of tables and
views contained in the schema, along with their corresponding detail.
To retrieve a list of every table, view, and schema in the database, set
table_name
to '*' and show_children
to
true
. When doing this, the
returned total_size
and total_full_size
will not
include the sizes of
non-base tables (e.g., filters, views, joins, etc.).
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table for which to retrieve the
information, in [schema_name.]table_name format,
using standard name resolution rules. If
blank, then returns information about all tables
and views. |
options |
Object
|
Optional parameters.
- 'force_synchronous': If
true, the table sizes will wait
for a read lock before returning.
Supported values:
The default value is 'true'.
- 'get_sizes': If
true then
the number of records in each table, along with a
cumulative count, will be returned; blank,
otherwise.
Supported values:
The default value is 'false'.
- 'get_cached_sizes': If
true then the number of records in
each table, along with a cumulative count, will be
returned; blank, otherwise. This version will
return the sizes cached at rank 0, which may be
stale if there is a multihead insert occurring.
Supported values:
The default value is 'false'.
- 'show_children': If
table_name is a schema, then
true will return information about the
tables and views in the schema, and
false will return information about
the schema itself. If table_name is a
table or view, show_children must be
false . If table_name is
empty, then show_children must be
true .
Supported values:
The default value is 'true'.
- 'no_error_if_not_exists': If
false, an error is returned if the
provided table_name does not exist. If
true, an empty result is returned
instead.
Supported values:
The default value is 'false'.
- 'get_column_info': If
true then column info (memory usage,
etc) will be returned.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
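A usage sketch, assuming a GPUdb instance gpudb and a hypothetical table example.my_table; the total_size response field is described above:

gpudb.show_table("example.my_table", { "get_sizes": "true" },
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response.total_size);  // record count across requested tables
    });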
show_table_metadata(table_names, options, callback) → {Promise}
Retrieves the user-provided metadata for the specified tables.
Parameters:
Name |
Type |
Description |
table_names |
Array.<String>
|
Names of tables whose metadata will be
fetched, in [schema_name.]table_name format,
using standard name resolution rules. All
provided tables must exist, or an error is
returned. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_table_metadata_request(request, callback) → {Promise}
Retrieves the user-provided metadata for the specified tables.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_table_monitors(monitor_ids, options, callback) → {Promise}
Show table monitors and their properties. Table monitors are created using
GPUdb#create_table_monitor
.
Returns detailed information about existing table monitors.
Parameters:
Name |
Type |
Description |
monitor_ids |
Array.<String>
|
List of monitors to be shown. An empty list
or a single entry with an empty string
returns all table monitors. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_table_monitors_request(request, callback) → {Promise}
Show table monitors and their properties. Table monitors are created using
GPUdb#create_table_monitor
.
Returns detailed information about existing table monitors.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_table_request(request, callback) → {Promise}
Retrieves detailed information about a table, view, or schema,
specified in
table_name
. If the supplied
table_name
is a
schema the call can return information about either the schema itself or the
tables and views it contains. If
table_name
is empty,
information about
all schemas will be returned.
If the option get_sizes
is set to
true
, then the number of records
in each table is returned (in sizes
and
full_sizes
), along with the total number of objects across all
requested tables (in total_size
and
total_full_size
).
For a schema, setting the show_children
option to
false
returns only information
about the schema itself; setting show_children
to
true
returns a list of tables and
views contained in the schema, along with their corresponding detail.
To retrieve a list of every table, view, and schema in the database, set
table_name
to '*' and show_children
to
true
. When doing this, the
returned total_size
and total_full_size
will not
include the sizes of
non-base tables (e.g., filters, views, joins, etc.).
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_tables_by_type(type_id, label, options, callback) → {Promise}
Gets names of the tables whose type matches the given criteria. Each table
has a particular type. This type comprises the schema and properties of the
table and sometimes a type label. This function allows a lookup of the
existing tables based on full or partial type information. The operation is
synchronous.
Parameters:
Name |
Type |
Description |
type_id |
String
|
Type id returned by a call to
GPUdb#create_type . |
label |
String
|
Optional user supplied label which can be used
instead of the type_id to retrieve all tables with
the given label. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
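A brief sketch, assuming a GPUdb instance gpudb and a type_id previously returned by GPUdb#create_type; the label argument is left blank:

// type_id obtained from a prior GPUdb#create_type response
gpudb.show_tables_by_type(type_id, "", {},
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);  // names of tables with the given type
    });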
show_tables_by_type_request(request, callback) → {Promise}
Gets names of the tables whose type matches the given criteria. Each table
has a particular type. This type comprises the schema and properties of the
table and sometimes a type label. This function allows a lookup of the
existing tables based on full or partial type information. The operation is
synchronous.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_triggers(trigger_ids, options, callback) → {Promise}
Retrieves information regarding the specified triggers or all existing
triggers currently active.
Parameters:
Name |
Type |
Description |
trigger_ids |
Array.<String>
|
List of IDs of the triggers whose information
is to be retrieved. An empty list means
information will be retrieved on all active
triggers. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_triggers_request(request, callback) → {Promise}
Retrieves information regarding the specified triggers or all existing
triggers currently active.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_types(type_id, label, options, callback) → {Promise}
Retrieves information for the specified data type ID or type label. For all
data types that match the input criteria, the database returns the type ID,
the type schema, the label (if available), and the type's column properties.
Parameters:
Name |
Type |
Description |
type_id |
String
|
Type Id returned in response to a call to
GPUdb#create_type . |
label |
String
|
Optional label string that was supplied by the user in a call to
GPUdb#create_type . |
options |
Object
|
Optional parameters.
- 'no_join_types': When set to 'true',
no join types will be included.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
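A sketch along the same lines, assuming a GPUdb instance gpudb and a type_id from a prior GPUdb#create_type call:

// Exclude join types from the result.
gpudb.show_types(type_id, "", { "no_join_types": "true" },
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);
    });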
show_types_request(request, callback) → {Promise}
Retrieves information for the specified data type ID or type label. For all
data types that match the input criteria, the database returns the type ID,
the type schema, the label (if available), and the type's column properties.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_video(paths, options, callback) → {Promise}
Retrieves information about rendered videos.
Parameters:
Name |
Type |
Description |
paths |
Array.<String>
|
The fully-qualified KiFS paths for the videos to
show. If empty, shows all videos. |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
show_video_request(request, callback) → {Promise}
Retrieves information about rendered videos.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
solve_graph(graph_name, weights_on_edges, restrictions, solver_type, source_nodes, destination_nodes, solution_table, options, callback) → {Promise}
Solves an existing graph for a type of problem (e.g., shortest path,
page rank, travelling salesman, etc.) using source nodes, destination nodes,
and
additional, optional weights and restrictions.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, the
Graph REST
Tutorial,
and/or some
/solve/graph
examples
before using this endpoint.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph resource to solve. |
weights_on_edges |
Array.<String>
|
Additional weights to apply to the edges
of an existing
graph. Weights must be specified using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing
column names, e.g.,
'table.column AS WEIGHTS_EDGE_ID',
expressions, e.g.,
'ST_LENGTH(wkt) AS
WEIGHTS_VALUESPECIFIED', or constant
values, e.g.,
'{4, 15, 2} AS WEIGHTS_VALUESPECIFIED'.
Any provided weights will be added
(in the case of
'WEIGHTS_VALUESPECIFIED') to or
multiplied with
(in the case of
'WEIGHTS_FACTORSPECIFIED') the existing
weight(s). If using
constant values in an identifier
combination, the number of values
specified
must match across the combination. |
restrictions |
Array.<String>
|
Additional restrictions to apply to the
nodes/edges of an
existing graph. Restrictions must be
specified using
identifiers;
identifiers are grouped as
combinations.
Identifiers can be used with existing column
names, e.g.,
'table.column AS RESTRICTIONS_EDGE_ID',
expressions, e.g.,
'column/2 AS RESTRICTIONS_VALUECOMPARED', or
constant values, e.g.,
'{0, 0, 0, 1} AS
RESTRICTIONS_ONOFFCOMPARED'. If using
constant values in an
identifier combination, the number of values
specified must match across the
combination. If remove_previous_restrictions
option is set
to true, any
provided restrictions will replace the
existing restrictions. Otherwise, any
provided
restrictions will be added (in the case of
'RESTRICTIONS_VALUECOMPARED') to or
replaced (in the case of
'RESTRICTIONS_ONOFFCOMPARED'). |
solver_type |
String
|
The type of solver to use for the graph.
Supported values:
- 'SHORTEST_PATH': Solves for the
optimal (shortest) path based on weights and
restrictions from one source to destination
nodes. Also known as the Dijkstra solver.
- 'PAGE_RANK': Solves for the
probability of each destination node being
visited based on the links of the graph
topology. Weights are not required to use this
solver.
- 'PROBABILITY_RANK': Solves for the
transitional probability (Hidden Markov) for
each node based on the weights (probability
assigned over given edges).
- 'CENTRALITY': Solves for the
degree of a node to depict how many pairs of
individuals that would have to go through the
node to reach one another in the minimum number
of hops. Also known as betweenness.
- 'MULTIPLE_ROUTING': Solves for
finding the minimum cost cumulative path for a
round-trip starting from the given source and
visiting each given destination node once then
returning to the source. Also known as the
travelling salesman problem.
- 'INVERSE_SHORTEST_PATH': Solves
for finding the optimal path cost for each
destination node to route to the source node.
Also known as inverse Dijkstra or the service
man routing problem.
- 'BACKHAUL_ROUTING': Solves for
optimal routes that connect remote asset nodes
to the fixed (backbone) asset nodes.
- 'ALLPATHS': Solves for paths that
would give costs between max and min solution
radii - make sure to limit the result size with the
'max_solution_targets' option. The min cost should
be >= the shortest_path cost.
- 'STATS_ALL': Solves for graph
statistics such as graph diameter, longest
pairs, vertex valences, topology numbers,
average and max cluster sizes, etc.
- 'CLOSENESS': Solves for the
centrality closeness score per node as the sum
of the inverse shortest path costs to all nodes
in the graph.
The default value is 'SHORTEST_PATH'. |
source_nodes |
Array.<String>
|
It can be one of the nodal identifiers -
e.g., 'NODE_WKTPOINT' for source nodes. For
BACKHAUL_ROUTING , this list
depicts the fixed assets. |
destination_nodes |
Array.<String>
|
It can be one of the nodal identifiers
- e.g., 'NODE_WKTPOINT' for destination
(target) nodes. For
BACKHAUL_ROUTING , this
list depicts the remote assets. |
solution_table |
String
|
Name of the table to store the solution, in
[schema_name.]table_name format, using
standard name resolution rules. |
options |
Object
|
Additional parameters
- 'max_solution_radius': For
ALLPATHS , SHORTEST_PATH
and INVERSE_SHORTEST_PATH solvers
only. Sets the maximum solution cost radius, which
ignores the destination_nodes list and
instead outputs the nodes within the radius sorted
by ascending cost. If set to '0.0', the setting is
ignored. The default value is '0.0'.
- 'min_solution_radius': For
ALLPATHS , SHORTEST_PATH
and INVERSE_SHORTEST_PATH solvers
only. Applicable only when
max_solution_radius is set. Sets the
minimum solution cost radius, which ignores the
destination_nodes list and instead
outputs the nodes within the radius sorted by
ascending cost. If set to '0.0', the setting is
ignored. The default value is '0.0'.
- 'max_solution_targets': For
ALLPATHS , SHORTEST_PATH
and INVERSE_SHORTEST_PATH solvers
only. Sets the maximum number of solution targets,
which ignores the destination_nodes
list and instead outputs no more than n
nodes sorted by ascending cost, where n is equal to
the setting value. If set to 0, the setting is
ignored. The default value is '1000'.
- 'uniform_weights': When specified,
assigns the given value to all the edges in the
graph. Note that weights provided in
weights_on_edges will override this
value.
- 'left_turn_penalty': This will add an
additional weight over the edges labelled as 'left
turn' if the 'add_turn' option parameter of the
GPUdb#create_graph was invoked at
graph creation. The default value is '0.0'.
- 'right_turn_penalty': This will add an
additional weight over the edges labelled as 'right
turn' if the 'add_turn' option parameter of the
GPUdb#create_graph was invoked at
graph creation. The default value is '0.0'.
- 'intersection_penalty': This will add
an additional weight over the edges labelled as
'intersection' if the 'add_turn' option parameter
of the
GPUdb#create_graph was invoked
at graph creation. The default value is '0.0'.
- 'sharp_turn_penalty': This will add an
additional weight over the edges labelled as 'sharp
turn' or 'u-turn' if the 'add_turn' option
parameter of the
GPUdb#create_graph
was invoked at graph creation. The default value
is '0.0'.
- 'num_best_paths': For
MULTIPLE_ROUTING solvers only; sets
the number of shortest paths computed from each
node. This is the heuristic criterion. Default
value of zero allows the number to be computed
automatically by the solver. The user may want to
override this parameter to speed up the solver.
The default value is '0'.
- 'max_num_combinations': For
MULTIPLE_ROUTING solvers only; sets
the cap on the combinatorial sequences generated.
If the default value of two million is overridden
to a lesser value, it can potentially speed up the
solver. The default value is '2000000'.
- 'output_edge_path': If true then
concatenated edge ids will be added as the EDGE
path column of the solution table for each source
and target pair in shortest path solves.
Supported values:
The default value is 'false'.
- 'output_wkt_path': If true then
concatenated wkt line segments will be added as the
Wktroute column of the solution table for each
source and target pair in shortest path solves.
Supported values:
The default value is 'true'.
- 'server_id': Indicates which graph
server(s) to send the request to. Default is to
send to the server, amongst those containing the
corresponding graph, that has the most
computational bandwidth. For SHORTEST_PATH solver
type, the input is split amongst the servers
containing the corresponding graph.
- 'convergence_limit': For
PAGE_RANK solvers only; Maximum
percent relative threshold on the pagerank scores
of each node between consecutive iterations to
satisfy convergence. Default value is 1 (one)
percent. The default value is '1.0'.
- 'max_iterations': For
PAGE_RANK solvers only; Maximum number
of pagerank iterations for satisfying convergence.
Default value is 100. The default value is '100'.
- 'max_runs': For all
CENTRALITY solvers only; Sets the
maximum number of shortest path runs; maximum
possible value is the number of nodes in the graph.
Default value of 0 enables this value to be auto
computed by the solver. The default value is '0'.
- 'output_clusters': For
STATS_ALL solvers only; the cluster
index for each node will be inserted as an
additional column in the output.
Supported values:
- 'true': An additional column 'CLUSTER'
will be added for each node
- 'false': No extra cluster info per
node will be available in the output
The default value is 'false'.
- 'solve_heuristic': Specify heuristic
search criterion only for the geo graphs and
shortest path solves towards a single target
Supported values:
- 'astar': Employs A-STAR heuristics to
speed up the shortest path traversal
- 'none': No heuristics are applied
The default value is 'none'.
- 'astar_radius': For path solvers only
when 'solve_heuristic' option is 'astar'. The
shortest path traversal front includes nodes only
within this radius (kilometers) as it moves towards
the target location. The default value is '70'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
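A hedged sketch of a SHORTEST_PATH solve, assuming a GPUdb instance gpudb; the graph name, WKT node values, and solution table are hypothetical, and the 'NODE_WKTPOINT' strings follow the nodal-identifier convention noted above:

gpudb.solve_graph(
    "example_graph",                          // hypothetical existing graph
    [],                                       // no additional edge weights
    [],                                       // no additional restrictions
    "SHORTEST_PATH",
    ["{'POINT(10 10)'} AS NODE_WKTPOINT"],    // hypothetical source node
    ["{'POINT(30 30)'} AS NODE_WKTPOINT"],    // hypothetical destination node
    "example.solution_table",
    { "output_wkt_path": "true" },
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);
    });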
solve_graph_request(request, callback) → {Promise}
Solves an existing graph for a type of problem (e.g., shortest path,
page rank, travelling salesman, etc.) using source nodes, destination nodes,
and
additional, optional weights and restrictions.
IMPORTANT: It's highly recommended that you review the
Network
Graphs & Solvers
concepts documentation, the
Graph REST
Tutorial,
and/or some
/solve/graph
examples
before using this endpoint.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
submit_request(endpoint, request, callbackopt) → {Promise}
Submits an arbitrary request to GPUdb. The response will be returned via the
specified callback function, or via a promise if no callback function is
provided.
Parameters:
Name |
Type |
Attributes |
Description |
endpoint |
String
|
|
The endpoint to which to submit the request. |
request |
Object
|
|
The request object to submit. |
callback |
GPUdbCallback
|
<optional>
|
The callback function. |
- Source:
Returns:
A promise that will be fulfilled with the response object,
if no callback function is provided.
-
Type
-
Promise
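A minimal sketch, assuming a GPUdb instance gpudb; this issues the same request as the show_system_status wrapper above, but through the generic entry point:

gpudb.submit_request("/show/system/status", { options: {} })
    .then(function(response) { console.log(response); })
    .catch(function(error) { console.error(error); });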
update_records(table_name, expressions, new_values_maps, data, options, callback) → {Promise}
Runs multiple predicate-based updates in a single call. With the
list of given expressions, any matching record's column values will be
updated
as provided in
new_values_maps
. There is also an optional
'upsert'
capability where if a particular predicate doesn't match any existing
record,
then a new record can be inserted.
Note that this operation can only be run on an original table and not on a
result view.
This operation can update primary key values. By default only
'pure primary key' predicates are allowed when updating primary key values.
If
the primary key for a table is the column 'attr1', then the operation will
only
accept predicates of the form: "attr1 == 'foo'" if the attr1 column is being
updated. For a composite primary key (e.g., columns 'attr1' and 'attr2'),
this operation will only accept predicates of the form:
"(attr1 == 'foo') and (attr2 == 'bar')". Meaning, all primary key columns
must appear in an equality predicate in the expressions. Furthermore each
'pure primary key' predicate must be unique within a given request. These
restrictions can be removed by utilizing some available options through
options
.
The update_on_existing_pk
option specifies the record primary
key collision
policy for tables with a primary key, while
ignore_existing_pk
specifies the record primary key collision
error-suppression policy when those collisions result in the update being
rejected. Both are
ignored on tables with no primary key.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of table to be updated, in
[schema_name.]table_name format, using standard
name resolution rules. Must
be a currently
existing table and not a view. |
expressions |
Array.<String>
|
A list of the actual predicates, one for each
update; format should follow the guidelines
here . |
new_values_maps |
Array.<Object>
|
List of new values for the matching
records. Each element is a map with
(key, value) pairs where the keys are the
names of the columns whose values are to
be updated; the
values are the new values. The number of
elements in the list should match the
length of expressions . |
data |
Array.<Object>
|
An optional list of JSON encoded objects to insert,
one for each update, to be added if the particular
update did not match any objects. |
options |
Object
|
Optional parameters.
- 'global_expression': An optional
global expression to reduce the search space of the
predicates listed in
expressions . The
default value is ''.
- 'bypass_safety_checks': When set to
true ,
all predicates are available for primary key
updates. Keep in mind that it is possible to
destroy
data in this case, since a single predicate may
match multiple objects (potentially all of records
of a table), and then updating all of those records
to have the same primary key will, due to the
primary key uniqueness constraints, effectively
delete all but one of those updated records.
Supported values:
The default value is 'false'.
- 'update_on_existing_pk': Specifies the
record collision policy for updating a table with a
primary key. There are two ways
that a record collision can
occur.
The first is an "update collision", which happens
when the update changes the value of the updated
record's primary key, and that new primary key
already exists as the primary key of another record
in the table.
The second is an "insert collision", which occurs
when a given filter in
expressions
finds no records to update, and the alternate
insert record given in
records_to_insert (or
records_to_insert_str ) contains a
primary key matching that of an existing record in
the
table.
If update_on_existing_pk is set to
true , "update collisions" will result
in the
existing record collided into being removed and the
record updated with values specified in
new_values_maps taking its place;
"insert collisions" will result in the
collided-into
record being updated with the values in
records_to_insert /records_to_insert_str
(if given).
If set to false , the existing
collided-into
record will remain unchanged, while the update will
be rejected and the error handled as determined
by ignore_existing_pk . If the
specified table does not have a primary key,
then this option has no effect.
Supported values:
- 'true': Overwrite the collided-into
record when updating a
record's primary key or inserting an alternate
record causes a primary key collision between the
record being updated/inserted and another existing
record in the table
- 'false': Reject updates which cause
primary key collisions
between the record being updated/inserted and an
existing record in the table
The default value is 'false'.
- 'ignore_existing_pk': Specifies the
record collision error-suppression policy for
updating a table with a primary key, only used when
primary
key record collisions are rejected
(
update_on_existing_pk is
false ). If set to
true , any record update that is
rejected for
resulting in a primary key collision with an
existing table record will be ignored with no error
generated. If false , the rejection of
any update
for resulting in a primary key collision will cause
an error to be reported. If the specified table
does not have a primary key or if
update_on_existing_pk is
true , then this option has no effect.
Supported values:
- 'true': Ignore updates that result in
primary key collisions with existing records
- 'false': Treat as errors any updates
that result in primary key collisions with existing
records
The default value is 'false'.
- 'update_partition': Force qualifying
records to be deleted and reinserted so their
partition membership will be reevaluated.
Supported values:
The default value is 'false'.
- 'truncate_strings': If set to
true , any strings which are too long
for their charN string fields will be truncated to
fit.
Supported values:
The default value is 'false'.
- 'use_expressions_in_new_values_maps':
When set to
true ,
all new values in new_values_maps are
considered as expression values. When set to
false , all new values in
new_values_maps are considered as
constants. NOTE: When
true , string constants will need
to be quoted to avoid being evaluated as
expressions.
Supported values:
The default value is 'false'.
- 'record_id': ID of a single record to
be updated (returned in the call to
GPUdb#insert_records or
GPUdb#get_records_from_collection ).
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
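A hedged sketch of a single predicate-based update, assuming a GPUdb instance gpudb; the table, primary-key column attr1, and updated column quantity are hypothetical:

gpudb.update_records(
    "example.my_table",           // hypothetical table with primary key attr1
    ["attr1 == 'foo'"],           // one 'pure primary key' predicate per update
    [{ "quantity": "42" }],       // new values for the matching record
    [],                           // no upsert records supplied
    {},                           // default options
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);
    });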
update_records_by_series(table_name, world_table_name, view_name, reserved, options, callback) → {Promise}
Updates the view specified by table_name
to include full
series (track) information from the world_table_name
for the
series
(tracks) present in the view_name
.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the view on which the update operation
will be performed, in [schema_name.]view_name
format, using standard name resolution rules. Must
be an existing view. |
world_table_name |
String
|
Name of the table containing the complete
series (track) information, in
[schema_name.]table_name format, using
standard name resolution rules. |
view_name |
String
|
Name of the view containing the series (tracks)
which have to be updated, in
[schema_name.]view_name format, using standard name resolution rules. |
reserved |
Array.<String>
|
|
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
update_records_by_series_request(request, callback) → {Promise}
Updates the view specified by table_name
to include full
series (track) information from the world_table_name
for the
series
(tracks) present in the view_name
.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
update_records_request(request, callback) → {Promise}
Runs multiple predicate-based updates in a single call. With the
list of given expressions, any matching record's column values will be
updated
as provided in
new_values_maps
. There is also an optional
'upsert'
capability where if a particular predicate doesn't match any existing
record,
then a new record can be inserted.
Note that this operation can only be run on an original table and not on a
result view.
This operation can update primary key values. By default only
'pure primary key' predicates are allowed when updating primary key values.
If
the primary key for a table is the column 'attr1', then the operation will
only
accept predicates of the form: "attr1 == 'foo'" if the attr1 column is being
updated. For a composite primary key (e.g., columns 'attr1' and 'attr2'),
this operation will only accept predicates of the form:
"(attr1 == 'foo') and (attr2 == 'bar')". Meaning, all primary key columns
must appear in an equality predicate in the expressions. Furthermore each
'pure primary key' predicate must be unique within a given request. These
restrictions can be removed by utilizing some available options through
options
.
The update_on_existing_pk
option specifies the record primary
key collision
policy for tables with a primary key, while
ignore_existing_pk
specifies the record primary key collision
error-suppression policy when those collisions result in the update being
rejected. Both are
ignored on tables with no primary key.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
upload_files(file_names, file_data, options, callback) → {Promise}
Uploads one or more files to
KiFS. There are
two methods for uploading files: load files in their entirety, or load files
in
parts. The latter is recommended for files of approximately 60 MB or larger.
To upload files in their entirety, populate file_names
with the
file
names to upload into on KiFS, and their respective byte content in
file_data
.
Multiple steps are involved when uploading in multiple parts. Only one file
at a
time can be uploaded in this manner. A user-provided UUID is utilized to tie
all
the upload steps together for a given file. To upload a file in multiple
parts:
1. Provide the file name in file_names
, the UUID in
the multipart_upload_uuid
key in options
, and
a multipart_operation
value of
init
.
2. Upload one or more parts by providing the file name, the part data
in file_data
, the UUID, a multipart_operation
value of upload_part
, and
the part number in the multipart_upload_part_number
.
The part numbers must start at 1 and increase incrementally.
Parts may not be uploaded out of order.
3. Complete the upload by providing the file name, the UUID, and a
multipart_operation
value of
complete
.
Multipart uploads in progress may be canceled by providing the file name,
the
UUID, and a multipart_operation
value of
cancel
. If a new upload is
initialized with a different UUID for an existing upload in progress, the
pre-existing upload is automatically canceled in favor of the new upload.
The multipart upload must be completed for the file to be usable in KiFS.
Information about multipart uploads in progress is available in
GPUdb#show_files
.
File data may be pre-encoded using base64 encoding. This should be indicated
using the file_encoding
option, and is recommended when
using JSON serialization.
Each file path must reside in a top-level KiFS directory, i.e. one of the
directories listed in GPUdb#show_directories
. The user must
have write
permission on the directory. Nested directories are permitted in file name
paths. Directories are delineated with the directory separator of '/'. For
example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.
These characters are allowed in file name paths: letters, numbers, spaces,
the
path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#'
'='.
Parameters:
Name |
Type |
Description |
file_names |
Array.<String>
|
An array of full file name paths to be used
for the files
uploaded to KiFS. File names may have any
number of nested directories in their
paths, but the top-level directory must be an
existing KiFS directory. Each file
must reside in or under a top-level directory.
A full file name path cannot be
larger than 1024 characters. |
file_data |
Array.<String>
|
File data for the files being uploaded, for the
respective files in file_names . |
options |
Object
|
Optional parameters.
- 'file_encoding': Encoding that has
been applied to the uploaded
file data. When using JSON serialization it is
recommended to utilize
base64 . The caller is responsible
for encoding the data provided in this payload.
Supported values:
- 'base64': Specifies that the file data
being uploaded has been base64 encoded.
- 'none': The uploaded file data has not
been encoded.
The default value is 'none'.
- 'multipart_operation': Multipart
upload operation to perform
Supported values:
- 'none': Default, indicates this is not
a multipart upload
- 'init': Initialize a multipart file
upload
- 'upload_part': Uploads a part of the
specified multipart file upload
- 'complete': Complete the specified
multipart file upload
- 'cancel': Cancel the specified
multipart file upload
The default value is 'none'.
- 'multipart_upload_uuid': UUID to
uniquely identify a multipart upload
- 'multipart_upload_part_number':
Incremental part number for each part in a
multipart upload. Part numbers start at 1,
increment by 1, and must be uploaded
sequentially
- 'delete_if_exists': If
true ,
any existing files specified in
file_names will be deleted prior to
start of upload.
Otherwise the file is replaced once the upload
completes. Rollback of the original file is
no longer possible if the upload is cancelled,
aborted or fails if the file was deleted
beforehand.
Supported values:
The default value is 'false'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
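A sketch of a whole-file upload, assuming a Node.js environment (for Buffer), a GPUdb instance gpudb, and a hypothetical top-level KiFS directory 'data':

// Base64-encode the payload, as recommended for JSON serialization.
var content = Buffer.from("hello, KiFS").toString("base64");

gpudb.upload_files(
    ["/data/greeting.txt"],            // path under the 'data' KiFS directory
    [content],
    { "file_encoding": "base64" },
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response);
    });

For files of roughly 60 MB or more, the multipart sequence described above (init, upload_part per chunk, complete) applies instead.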
upload_files_fromurl(file_names, urls, options, callback) → {Promise}
Uploads one or more files to
KiFS.
Each file path must reside in a top-level KiFS directory, i.e. one of the
directories listed in GPUdb#show_directories
. The user must
have write
permission on the directory. Nested directories are permitted in file name
paths. Directories are delineated with the directory separator of '/'. For
example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.
These characters are allowed in file name paths: letters, numbers, spaces,
the
path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#'
'='.
Parameters:
Name |
Type |
Description |
file_names |
Array.<String>
|
An array of full file name paths to be used
for the files
uploaded to KiFS. File names may have any
number of nested directories in their
paths, but the top-level directory must be an
existing KiFS directory. Each file
must reside in or under a top-level directory.
A full file name path cannot be
larger than 1024 characters. |
urls |
Array.<String>
|
List of URLs to upload, for each respective file in
file_names . |
options |
Object
|
Optional parameters. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
upload_files_fromurl_request(request, callback) → {Promise}
Uploads one or more files to
KiFS.
Each file path must reside in a top-level KiFS directory, i.e. one of the
directories listed in GPUdb#show_directories
. The user must
have write
permission on the directory. Nested directories are permitted in file name
paths. Directories are delineated with the directory separator of '/'. For
example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.
These characters are allowed in file name paths: letters, numbers, spaces,
the
path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#'
'='.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
upload_files_request(request, callback) → {Promise}
Uploads one or more files to
KiFS. There are
two methods for uploading files: load files in their entirety, or load files
in
parts. The latter is recommended for files of approximately 60 MB or larger.
To upload files in their entirety, populate file_names
with the
file
names to upload into on KiFS, and their respective byte content in
file_data
.
Multiple steps are involved when uploading in multiple parts. Only one file
at a
time can be uploaded in this manner. A user-provided UUID is utilized to tie
all
the upload steps together for a given file. To upload a file in multiple
parts:
1. Provide the file name in file_names
, the UUID in
the multipart_upload_uuid
key in options
, and
a multipart_operation
value of
init
.
2. Upload one or more parts by providing the file name, the part data
in file_data
, the UUID, a multipart_operation
value of upload_part
, and
the part number in the multipart_upload_part_number
.
The part numbers must start at 1 and increase incrementally.
Parts may not be uploaded out of order.
3. Complete the upload by providing the file name, the UUID, and a
multipart_operation
value of
complete
.
Multipart uploads in progress may be canceled by providing the file name,
the
UUID, and a multipart_operation
value of
cancel
. If a new upload is
initialized with a different UUID for an existing upload in progress, the
pre-existing upload is automatically canceled in favor of the new upload.
The multipart upload must be completed for the file to be usable in KiFS.
Information about multipart uploads in progress is available in
GPUdb#show_files
.
File data may be pre-encoded using base64 encoding. This should be indicated
using the file_encoding
option, and is recommended when
using JSON serialization.
Each file path must reside in a top-level KiFS directory, i.e. one of the
directories listed in GPUdb#show_directories
. The user must
have write
permission on the directory. Nested directories are permitted in file name
paths. Directories are delineated with the directory separator of '/'. For
example, given the file path '/a/b/c/d.txt', 'a' must be a KiFS directory.
These characters are allowed in file name paths: letters, numbers, spaces,
the
path delimiter of '/', and the characters: '.' '-' ':' '[' ']' '(' ')' '#'
'='.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
visualize_image_chart(table_name, x_column_names, y_column_names, min_x, max_x, min_y, max_y, width, height, bg_color, style_options, options, callback) → {Promise}
Scatter plot is the only plot type currently supported. A non-numeric column
can be specified as x or y column and jitters can be added to them to avoid
excessive overlapping. All color values must be in the format RRGGBB or
AARRGGBB (to specify the alpha value).
The image is contained in the image_data
field.
Parameters:
Name |
Type |
Description |
table_name |
String
|
Name of the table containing the data to be
drawn as a chart, in [schema_name.]table_name
format, using standard name resolution rules. |
x_column_names |
Array.<String>
|
Names of the columns containing the data
mapped to the x axis of a chart. |
y_column_names |
Array.<String>
|
Names of the columns containing the data
mapped to the y axis of a chart. |
min_x |
Number
|
Lower bound for the x column values. For non-numeric
x column, each x column item is mapped to an integral
value starting from 0. |
max_x |
Number
|
Upper bound for the x column values. For non-numeric
x column, each x column item is mapped to an integral
value starting from 0. |
min_y |
Number
|
Lower bound for the y column values. For non-numeric
y column, each y column item is mapped to an integral
value starting from 0. |
max_y |
Number
|
Upper bound for the y column values. For non-numeric
y column, each y column item is mapped to an integral
value starting from 0. |
width |
Number
|
Width of the generated image in pixels. |
height |
Number
|
Height of the generated image in pixels. |
bg_color |
String
|
Background color of the generated image. |
style_options |
Object
|
Rendering style options for a chart.
- 'pointcolor': The color of
points in the plot represented as a
hexadecimal number. The default value is
'0000FF'.
- 'pointsize': The size of points
in the plot represented as number of pixels.
The default value is '3'.
- 'pointshape': The shape of
points in the plot.
Supported values:
- 'none'
- 'circle'
- 'square'
- 'diamond'
- 'hollowcircle'
- 'hollowsquare'
- 'hollowdiamond'
The default value is 'square'.
- 'cb_pointcolors': Point color
class break information consisting of three
entries: class-break attribute, class-break
values/ranges, and point color values. This
option overrides the pointcolor option if
both are provided. Class-break ranges are
represented in the form of "min:max".
Class-break values/ranges and point color
values are separated by cb_delimiter, e.g.
{"price", "20:30;30:40;40:50",
"0xFF0000;0x00FF00;0x0000FF"}.
- 'cb_pointsizes': Point size
class break information consisting of three
entries: class-break attribute, class-break
values/ranges, and point size values. This
option overrides the pointsize option if both
are provided. Class-break ranges are
represented in the form of "min:max".
Class-break values/ranges and point size
values are separated by cb_delimiter, e.g.
{"states", "NY;TX;CA", "3;5;7"}.
- 'cb_pointshapes': Point shape
class break information consisting of three
entries: class-break attribute, class-break
values/ranges, and point shape names. This
option overrides the pointshape option if
both are provided. Class-break ranges are
represented in the form of "min:max".
Class-break values/ranges and point shape
names are separated by cb_delimiter, e.g.
{"states", "NY;TX;CA",
"circle;square;diamond"}.
- 'cb_delimiter': A character or
string which separates per-class values in a
class-break style option string. The default
value is ';'.
- 'x_order_by': An expression or
aggregate expression by which non-numeric x
column values are sorted, e.g. "avg(price)
descending".
- 'y_order_by': An expression or
aggregate expression by which non-numeric y
column values are sorted, e.g. "avg(price)",
which defaults to "avg(price) ascending".
- 'scale_type_x': Type of x axis
scale.
Supported values:
- 'none': No scale is applied to
the x axis.
- 'log': A base-10 log scale is
applied to the x axis.
The default value is 'none'.
- 'scale_type_y': Type of y axis
scale.
Supported values:
- 'none': No scale is applied to
the y axis.
- 'log': A base-10 log scale is
applied to the y axis.
The default value is 'none'.
- 'min_max_scaled': If this
option is set to "false", this endpoint
expects the request's min/max values to not
yet be scaled. They will be scaled according
to scale_type_x or scale_type_y for the
response. If this option is set to "true",
this endpoint expects the request's min/max
values to already be scaled according to
scale_type_x/scale_type_y. The response's
min/max values will be equal to the request's
min/max values. The default value is 'false'.
- 'jitter_x': Amplitude of
horizontal jitter applied to non-numeric x
column values. The default value is '0.0'.
- 'jitter_y': Amplitude of
vertical jitter applied to non-numeric y
column values. The default value is '0.0'.
- 'plot_all': If this option is
set to "true", all non-numeric column values
are plotted, ignoring the min_x, max_x, min_y and
max_y parameters. The default value is
'false'.
|
options |
Object
|
Optional parameters.
- 'image_encoding': Encoding to be
applied to the output image. When using JSON
serialization it is recommended to specify this as
base64 .
Supported values:
- 'base64': Apply base64 encoding to the
output image.
- 'none': Do not apply any additional
encoding to the output image.
The default value is 'none'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
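A sketch of a basic scatter plot, assuming a GPUdb instance gpudb and a hypothetical table example.my_table with numeric columns x and y; the image_data response field is noted above:

gpudb.visualize_image_chart(
    "example.my_table",
    ["x"], ["y"],                      // column lists for each axis
    0, 100, 0, 100,                    // min_x, max_x, min_y, max_y
    640, 480,                          // width, height in pixels
    "FFFFFF",                          // white background
    { "pointcolor": "0000FF", "pointsize": "3" },
    { "image_encoding": "base64" },
    function(err, response) {
        if (err) { console.error(err); return; }
        console.log(response.image_data.length);  // encoded image payload
    });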
visualize_image_chart_request(request, callback) → {Promise}
Scatter plot is the only plot type currently supported. A non-numeric column
can be specified as x or y column and jitters can be added to them to avoid
excessive overlapping. All color values must be in the format RRGGBB or
AARRGGBB (to specify the alpha value).
The image is contained in the image_data
field.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
visualize_isochrone(graph_name, source_node, max_solution_radius, weights_on_edges, restrictions, num_levels, generate_image, levels_table, style_options, solve_options, contour_options, options, callback) → {Promise}
Generate an image containing isolines for travel results using an
existing graph. Isolines represent curves of equal cost, with cost typically
referring to the time or distance assigned as the weights of the underlying
graph. See
Network
Graphs & Solvers
for more information on graphs.
Parameters:
Name |
Type |
Description |
graph_name |
String
|
Name of the graph on which the isochrone is to
be computed. |
source_node |
String
|
Starting vertex on the underlying graph from/to
which the isochrones are created. |
max_solution_radius |
Number
|
Extent of the search radius around
source_node . Set to '-1.0'
for unrestricted search radius. |
weights_on_edges |
Array.<String>
|
Additional weights to apply to the edges
of an existing graph. Weights must be
specified using identifiers;
identifiers are grouped as combinations.
Identifiers can be used with existing
column names, e.g., 'table.column AS
WEIGHTS_EDGE_ID', or expressions, e.g.,
'ST_LENGTH(wkt) AS
WEIGHTS_VALUESPECIFIED'. Any provided
weights will be added (in the case of
'WEIGHTS_VALUESPECIFIED') to or
multiplied with (in the case of
'WEIGHTS_FACTORSPECIFIED') the existing
weight(s). |
restrictions |
Array.<String>
|
Additional restrictions to apply to the
nodes/edges of an existing graph.
Restrictions must be specified using identifiers; identifiers
are grouped as combinations. Identifiers
can be used with existing column names,
e.g., 'table.column AS
RESTRICTIONS_EDGE_ID', or expressions, e.g.,
'column/2 AS RESTRICTIONS_VALUECOMPARED'. If
remove_previous_restrictions is
set to true , any provided
restrictions will replace the existing
restrictions. If
remove_previous_restrictions is
set to false , any provided
restrictions will be added (in the case of
'RESTRICTIONS_VALUECOMPARED') to or replaced
(in the case of
'RESTRICTIONS_ONOFFCOMPARED'). |
num_levels |
Number
|
Number of equally-separated isochrones to
compute. |
generate_image |
Boolean
|
If set to true , generates a
PNG image of the isochrones in the
response.
Supported values:
The default value is true. |
levels_table |
String
|
Name of the table to output the isochrones to,
in [schema_name.]table_name format, using
standard name resolution rules and
meeting table naming criteria. The
table will contain levels and their
corresponding WKT geometry. If no value is
provided, the table is not generated. |
style_options |
Object
|
Various style related options of the
isochrone image.
- 'line_size': The width of the
contour lines in pixels. The default value
is '3'.
- 'color': Color of generated
isolines. All color values must be in the
format RRGGBB or AARRGGBB (to specify the
alpha value). If alpha is specified and
flooded contours are enabled, it will be used
for as the transparency of the latter. The
default value is 'FF696969'.
- 'bg_color': When
generate_image is set to
true , background color of the
generated image. All color values must be in
the format RRGGBB or AARRGGBB (to specify the
alpha value). The default value is
'00000000'.
- 'text_color': When
add_labels is set to
true , color for the labels. All
color values must be in the format RRGGBB or
AARRGGBB (to specify the alpha value). The
default value is 'FF000000'.
- 'colormap': Colormap for
contours or fill-in regions when applicable.
All color values must be in the format RRGGBB
or AARRGGBB (to specify the alpha value)
Supported values:
- 'jet'
- 'accent'
- 'afmhot'
- 'autumn'
- 'binary'
- 'blues'
- 'bone'
- 'brbg'
- 'brg'
- 'bugn'
- 'bupu'
- 'bwr'
- 'cmrmap'
- 'cool'
- 'coolwarm'
- 'copper'
- 'cubehelix'
- 'dark2'
- 'flag'
- 'gist_earth'
- 'gist_gray'
- 'gist_heat'
- 'gist_ncar'
- 'gist_rainbow'
- 'gist_stern'
- 'gist_yarg'
- 'gnbu'
- 'gnuplot2'
- 'gnuplot'
- 'gray'
- 'greens'
- 'greys'
- 'hot'
- 'hsv'
- 'inferno'
- 'magma'
- 'nipy_spectral'
- 'ocean'
- 'oranges'
- 'orrd'
- 'paired'
- 'pastel1'
- 'pastel2'
- 'pink'
- 'piyg'
- 'plasma'
- 'prgn'
- 'prism'
- 'pubu'
- 'pubugn'
- 'puor'
- 'purd'
- 'purples'
- 'rainbow'
- 'rdbu'
- 'rdgy'
- 'rdpu'
- 'rdylbu'
- 'rdylgn'
- 'reds'
- 'seismic'
- 'set1'
- 'set2'
- 'set3'
- 'spectral'
- 'spring'
- 'summer'
- 'terrain'
- 'viridis'
- 'winter'
- 'wistia'
- 'ylgn'
- 'ylgnbu'
- 'ylorbr'
- 'ylorrd'
The default value is 'jet'.
|
solve_options |
Object
|
Solver-specific parameters
- 'remove_previous_restrictions':
Ignore the restrictions applied to the graph
during the creation stage and only use the
restrictions specified in this request if set
to
true .
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'restriction_threshold_value':
Value-based restriction comparison. Any node
or edge with a 'RESTRICTIONS_VALUECOMPARED'
value greater than the
restriction_threshold_value will
not be included in the solution.
- 'uniform_weights': When
specified, assigns the given value to all the
edges in the graph. Note that weights
provided in
weights_on_edges
will override this value.
|
contour_options |
Object
|
Contour-specific parameters
- 'projection': Spatial
Reference System (i.e., EPSG code).
Supported values:
- '3857'
- '102100'
- '900913'
- 'EPSG:4326'
- 'PLATE_CARREE'
- 'EPSG:900913'
- 'EPSG:102100'
- 'EPSG:3857'
- 'WEB_MERCATOR'
The default value is 'PLATE_CARREE'.
- 'width': When
generate_image is set to
true , width of the generated
image. The default value is '512'.
- 'height': When
generate_image is set to
true , height of the generated
image. If the default value is used, the
height is set to the value
resulting from multiplying the aspect ratio
by the width . The default
value is '-1'.
- 'search_radius': When
interpolating the graph solution to
generate the isochrone, neighborhood of
influence of sample data (in percent of the
image/grid). The default value is '20'.
- 'grid_size': When
interpolating the graph solution to
generate the isochrone, number of
subdivisions along the x axis when building
the grid (the y is computed using the
aspect ratio of the output image). The
default value is '100'.
- 'color_isolines': Color each
isoline according to the colormap;
otherwise, use the foreground color.
Supported values:
- 'true'
- 'false'
The default value is 'true'.
- 'add_labels': If set to
true , add labels to the
isolines.
Supported values:
- 'true'
- 'false'
The default value is 'false'.
- 'labels_font_size': When
add_labels is set to
true , size of the font (in
pixels) to use for labels. The default
value is '12'.
- 'labels_font_family': When
add_labels is set to
true , font name to be used
when adding labels. The default value is
'arial'.
- 'labels_search_window': When
add_labels is set to
true , a search window is used
to rate the local quality of each isoline.
Smooth, continuous, long stretches with
relatively flat angles are favored. The
provided value is multiplied by the
labels_font_size to calculate
the final window size. The default value
is '4'.
-
'labels_intralevel_separation': When
add_labels is set to
true , this value determines
the distance (in multiples of the
labels_font_size ) to use when
separating labels of different values. The
default value is '4'.
-
'labels_interlevel_separation': When
add_labels is set to
true , this value determines
the distance (in percent of the total
window size) to use when separating labels
of the same value. The default value is
'20'.
- 'labels_max_angle': When
add_labels is set to
true , maximum angle (in
degrees) from the vertical to use when
adding labels. The default value is '60'.
|
options |
Object
|
Additional parameters
- 'solve_table': Name of the table to
host intermediate solve results, in
[schema_name.]table_name format, using standard name resolution rules and meeting
table naming criteria. This
table will contain the position and cost for each
vertex in the graph. If the default value is used,
a temporary table is created and deleted once the
solution is calculated. The default value is ''.
- 'is_replicated': If set to
true , replicate the
solve_table .
Supported values:
- 'true'
- 'false'
The default value is 'true'.
- 'data_min_x': Lower bound for the x
values. If not provided, it will be computed from
the bounds of the input data.
- 'data_max_x': Upper bound for the x
values. If not provided, it will be computed from
the bounds of the input data.
- 'data_min_y': Lower bound for the y
values. If not provided, it will be computed from
the bounds of the input data.
- 'data_max_y': Upper bound for the y
values. If not provided, it will be computed from
the bounds of the input data.
- 'concavity_level': Factor to qualify
the concavity of the isochrone curves. The lower
the value, the more convex (with '0' being
completely convex and '1' being the most concave).
The default value is '0.5'.
- 'use_priority_queue_solvers': Sets the
solver methods explicitly if set to
true .
Supported values:
- 'true': Uses the solvers scheduled for
'shortest_path' and 'inverse_shortest_path' based
on solve_direction
- 'false': Uses the solvers
'priority_queue' and 'inverse_priority_queue' based
on solve_direction
The default value is 'false'.
- 'solve_direction': Specifies whether the
solve proceeds toward the source node or starts from it.
Supported values:
- 'from_source': Shortest path to get to
the source (inverse Dijkstra)
- 'to_source': Shortest path to source
(Dijkstra)
The default value is 'from_source'.
|
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
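Example: a minimal promise-based sketch of this call. The graph name,
source node, and connection details below are hypothetical, and the
module path is environment-specific; the call itself follows the
parameter order documented above.
const GPUdb = require("./GPUdb.js"); // path to the GPUdb API file (environment-specific)
const db = new GPUdb("http://localhost:9191");
db.visualize_isochrone(
    "road_graph",   // graph_name: an existing graph (hypothetical name)
    "SEED_NODE",    // source_node: starting vertex (hypothetical identifier)
    -1.0,           // max_solution_radius: -1.0 = unrestricted
    [],             // weights_on_edges: no additional weights
    [],             // restrictions: no additional restrictions
    5,              // num_levels: five equally-separated isochrones
    true,           // generate_image: include a PNG in the response
    "",             // levels_table: empty = do not persist the levels
    { "color": "FF696969" },       // style_options
    {},                            // solve_options
    { "projection": "EPSG:3857" }, // contour_options
    {}                             // options
).then(function (response) {
    // The response object carries the generated image and isochrone
    // data; consult the response schema for the exact field names.
    console.log("Isochrone solve complete");
}).catch(function (error) {
    console.error(error);
});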
visualize_isochrone_request(request, callback) → {Promise}
Generate an image containing isolines for travel results using an
existing graph. Isolines represent curves of equal cost, with cost typically
referring to the time or distance assigned as the weights of the underlying
graph. See
Network
Graphs & Solvers
for more information on graphs.
Parameters:
Name |
Type |
Description |
request |
Object
|
Request object containing the parameters for the
operation. |
callback |
GPUdbCallback
|
Callback that handles the response. |
- Source:
Returns:
A promise that will be fulfilled with the response
object, if no callback function is provided.
-
Type
-
Promise
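Example: a sketch of the request-object form. It assumes (an
assumption, not confirmed above) that the request keys mirror the
positional parameters of visualize_isochrone; verify against the
request schema before use. Names are hypothetical and the module path
is environment-specific.
const GPUdb = require("./GPUdb.js");
const db = new GPUdb("http://localhost:9191");
db.visualize_isochrone_request({
    graph_name: "road_graph",   // hypothetical graph
    source_node: "SEED_NODE",   // hypothetical vertex
    max_solution_radius: -1.0,
    weights_on_edges: [],
    restrictions: [],
    num_levels: 5,
    generate_image: true,
    levels_table: "",
    style_options: {},
    solve_options: {},
    contour_options: {},
    options: {}
}).then(function (response) {
    console.log("Isochrone solve complete");
});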
wms_request(request, callbackopt) → {Promise}
Request a WMS (Web Map Service) rasterized image. The image will be returned
as a Node.js Buffer object via the specified callback function, or via a
promise if no callback function is provided.
Parameters:
Name |
Type |
Attributes |
Description |
request |
Object
|
|
Object containing WMS parameters. |
callback |
GPUdbCallback
|
<optional>
|
The callback function. |
- Source:
Returns:
A promise that will be fulfilled with the image, if no
callback function is provided.
-
Type
-
Promise
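Example: a sketch that saves the returned Buffer to disk. The WMS
parameter names shown are illustrative placeholders following common
WMS conventions; consult the WMS endpoint documentation for the exact
names and accepted values. The table name is hypothetical.
const fs = require("fs");
const GPUdb = require("./GPUdb.js"); // environment-specific path
const db = new GPUdb("http://localhost:9191");
db.wms_request({
    // Illustrative parameters only:
    request: "GetMap",
    layers: "my_table",
    width: 512,
    height: 512,
    bbox: "-180,-90,180,90"
}).then(function (image) {
    // image is a Node.js Buffer containing the rasterized map
    fs.writeFileSync("map.png", image);
});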
(static) decode(o) → {Object|Array.<Object>}
Decodes a JSON string, or array of JSON strings, returned from GPUdb into
JSON object(s).
Parameters:
Name |
Type |
Description |
o |
String
|
Array.<String>
|
The JSON string(s) to decode. |
- Source:
Returns:
The decoded JSON object(s).
-
Type
-
Object
|
Array.<Object>
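Example: both accepted input forms, with the module path
environment-specific.
const GPUdb = require("./GPUdb.js");
const one = GPUdb.decode('{"x": 1, "y": "a"}');
// one is { x: 1, y: "a" }
const many = GPUdb.decode(['{"x": 1}', '{"x": 2}']);
// many is [ { x: 1 }, { x: 2 } ]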
(static) decode_no_inf_nan(o) → {Object|Array.<Object>}
Decodes a JSON string, or array of JSON strings, returned from GPUdb into
JSON object(s), giving special treatment to the quoted strings
"Infinity", "-Infinity", and "NaN": each is caught and converted to
null. This is significantly slower than the regular decode function.
Parameters:
Name |
Type |
Description |
o |
String
|
Array.<String>
|
The JSON string(s) to decode. |
- Source:
Returns:
The decoded JSON object(s).
-
Type
-
Object
|
Array.<Object>
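Example: quoted "Infinity", "-Infinity", and "NaN" values are converted
to null, as described above.
const GPUdb = require("./GPUdb.js"); // environment-specific path
const rec = GPUdb.decode_no_inf_nan('{"a": "Infinity", "b": "NaN", "c": 1.5}');
// rec is { a: null, b: null, c: 1.5 }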
(static) decode_regular(o) → {Object|Array.<Object>}
Decodes a JSON string, or array of JSON strings, returned from GPUdb into
JSON object(s).
Parameters:
Name |
Type |
Description |
o |
String
|
Array.<String>
|
The JSON string(s) to decode. |
- Source:
Returns:
The decoded JSON object(s).
-
Type
-
Object
|
Array.<Object>
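Example: a sketch assuming plain JSON parsing with no special handling
of quoted "Infinity"/"NaN" (the behavior its description suggests, and
the way it differs from decode_no_inf_nan).
const GPUdb = require("./GPUdb.js"); // environment-specific path
const rec = GPUdb.decode_regular('{"a": "Infinity", "c": 1.5}');
// rec is { a: "Infinity", c: 1.5 } -- the quoted string is kept as-is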
(static) encode(o) → {String|Array.<String>}
Encodes a JSON object, or array of JSON objects, into JSON string(s) to be
passed to GPUdb.
Parameters:
Name |
Type |
Description |
o |
Object
|
Array.<Object>
|
The JSON object(s) to encode. |
- Source:
Returns:
The encoded JSON string(s).
-
Type
-
String
|
Array.<String>
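Example: both accepted input forms (string content shown without
insignificant whitespace).
const GPUdb = require("./GPUdb.js"); // environment-specific path
const one = GPUdb.encode({ x: 1 });
// one is '{"x":1}'
const many = GPUdb.encode([{ x: 1 }, { x: 2 }]);
// many is ['{"x":1}', '{"x":2}']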