Configuration Reference

Kinetica configuration consists of several user-modifiable system parameters present in /opt/gpudb/core/etc/gpudb.conf that are used to set up and tune the system.


Version

file_version = 7.1.3.0.20210119172644
    The current version of this configuration. This parameter is automatically updated by Kinetica and should not be edited manually.


Identification

ring_name = ring0
    Name of the ring containing the high availability clusters.
cluster_name = cluster0
    Name of the cluster containing all the nodes of the database.


Hosts

Specify each unique host in the cluster. Each host runs a local instance of HostManager, which manages the database services and components that are part of the Kinetica install. HostManager is also responsible for orchestrating intra-cluster failover and process switchover.

Host settings are defined as follows:

host<#>.<parameter>

Any number of hosts may be configured, starting with host0, and must be specified in consecutive order (e.g., host0, host1, host2).

Note

If either public_address or host_manager_public_url is specified for any host, that parameter must be specified for all hosts.

host0.address = 127.0.0.1

Host parameters include the following:

  • address : The unique address for this host. Single host clusters may use 127.0.0.1. This field is required.
  • public_address : An optional public address that clients should use when performing multi-head operations.
  • host_manager_public_url : An optional public URL that can be used to access HostManager endpoints. See host_manager_http_port.
  • ram_limit : An optional upper limit for RAM usage (in bytes) on this host, capping the amount of memory that all Kinetica processes combined are allowed to consume. This is especially important for intra-cluster failover: if the limit is too low, failed processes from a degraded host may not be fully redistributed, as the remaining hosts may lack the resources to accommodate them. Use -1 for no limit.
  • gpus : An optional comma-separated list of GPUs (as reported by NVML) that may be reserved by Kinetica services (e.g., ranks, graph server, ML, etc.). Leave empty to make all GPUs available for use. The same failover caveat with the RAM limit also applies.
  • accepts_failover : Whether or not this host should accept failed processes that need to be migrated off degraded hosts. Default is to not accept failed processes.
host0.public_address =
host0.host_manager_public_url =
host0.ram_limit = -1
host0.gpus =
host0.accepts_failover = false
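For instance, a hypothetical two-host cluster (the addresses and limits below are illustrative, not defaults) that caps host1 at 64 GiB of RAM and reserves its GPUs 0 and 1 might be declared as:

host0.address = 192.168.0.10
host0.accepts_failover = true
host1.address = 192.168.0.11
host1.ram_limit = 68719476736
host1.gpus = 0,1
host1.accepts_failover = true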


Ranks

rank0.host = host0
    Specify the host on which to run each rank worker process in the cluster. Multiple ranks may run on the same host. If a rank is specified with an empty host entry, it is effectively disabled (i.e., removed from the cluster) and no persisted data for this rank will be loaded, even if present on a host (e.g., 'rank2.host = host1' runs rank2 on host1).
rank1.host = host0
rank2.host = host0


Network

head_ip_address = ${gaia.host0.address}
    Head HTTP server IP address. Set to the publicly accessible IP address of the first process, rank0.
head_port = 9191
    Head HTTP server port to use for head_ip_address.
use_https = false
    Set to true to use HTTPS; if true, then https_key_file and https_cert_file must be provided.
https_key_file =

Files containing the SSL private key and the SSL certificate for HTTPS. If required, a self-signed certificate (expires after 10 years) can be generated via the command:

openssl req -newkey rsa:2048 -new -nodes -x509 \
  -days 3650 -keyout key.pem -out cert.pem
https_cert_file =
http_allow_origin = *
    Value to return via the Access-Control-Allow-Origin HTTP header (for Cross-Origin Resource Sharing). Set to empty to not return the header and disallow CORS.
http_keep_alive = false
    Keep HTTP connections alive between requests.
enable_httpd_proxy = false

Start an HTTP server as a proxy to handle LDAP and/or Kerberos authentication. Each host will run an HTTP server and access to each rank is available through http://host:8082/gpudb-1, where port 8082 is defined by httpd_proxy_port.

Note

HTTP external endpoints are not affected by the use_https parameter above. If you wish to enable HTTPS, you must edit /opt/gpudb/httpd/conf/httpd.conf and set up HTTPS as per the Apache httpd documentation at https://httpd.apache.org/docs/2.2/

httpd_proxy_port = 8082
    TCP port that the httpd auth proxy server will listen on if enable_httpd_proxy is true.
httpd_proxy_use_https = false
    Set to true if the httpd auth proxy server is configured to use HTTPS.
trigger_port = 9001
    Trigger ZMQ publisher server port (-1 to disable); uses the head_ip_address interface.
set_monitor_port = 9002
    Set monitor ZMQ publisher server port (-1 to disable); uses the head_ip_address interface.
set_monitor_proxy_port = 9003

Set monitor ZMQ publisher internal proxy server port (-1 to disable); uses the head_ip_address interface.

Important

Disabling this port effectively prevents worker nodes from publishing set monitor notifications when multi-head ingest is enabled (see enable_worker_http_servers).

enable_reveal = true
    Enable the Reveal runtime.
global_manager_port_one = 5552
    Internal communication port.
global_manager_pub_port = 5553
    Host manager synchronization port.
global_manager_local_pub_port = 5554
    Local host manager port.
host_manager_http_port = 9300
    HTTP port for the web portal of the host manager.
enable_worker_http_servers = true
    Enable worker HTTP servers; each process runs its own server for multi-head ingest.
rank1.worker_http_server_port = 9192
    Optionally, specify the worker HTTP server ports. The default is to use (head_port + rank #) for each worker process, where the rank number ranges from 1 to the number of ranks defined by the rank<#>.host parameters.
rank2.worker_http_server_port = 9193
rank0.public_url =

Optionally, specify a public URL for each worker HTTP server that clients should use to connect for multi-head operations.

Note

If specified for any ranks, a public URL must be specified for all ranks. Cannot be used in conjunction with nplus1.

rank1.public_url =
rank2.public_url =
rank0.communicator_port = 6555
    Specify the TCP ports each rank will use to communicate with the others. If the port for any rank<#> is not specified, it will be assigned rank0.communicator_port + rank #.
rank1.communicator_port = 6556
rank2.communicator_port = 6557
compress_network_data = false
    Enables compression of inter-node network data transfers.
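For example, a three-rank cluster reachable through a reverse proxy might publish per-rank URLs while keeping the default worker ports, which are derived as head_port + rank # (the host name below is illustrative):

rank0.public_url = https://db.example.com/gpudb-0
rank1.public_url = https://db.example.com/gpudb-1
rank2.public_url = https://db.example.com/gpudb-2
rank1.worker_http_server_port = 9192
rank2.worker_http_server_port = 9193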


Security

require_authentication = false
    Require authentication.
enable_authorization = false
    Enable authorization checks.
min_password_length = 0
    Minimum password length.
enable_external_authentication = false

Enable external (LDAP, Kerberos, etc.) authentication. User IDs of externally-authenticated users must be passed in via the REMOTE_USER HTTP header from the authentication proxy. May be used in conjunction with the enable_httpd_proxy setting above for an integrated external authentication solution.

Important

DO NOT ENABLE unless external access to Kinetica ports has been blocked via firewall AND the authentication proxy is configured to block REMOTE_USER HTTP headers passed in from clients.

external_authentication_handshake_key =
    Encrypted key that, if specified, must be passed in unencrypted via the KINETICA_HANDSHAKE_KEY HTTP header from the authentication proxy whenever a REMOTE_USER HTTP header is also passed in. A missing or incorrect handshake key will result in rejection of the request.
auto_create_external_users = false
    Automatically create accounts for externally-authenticated users. If enable_external_authentication is false, this setting has no effect. Note that accounts are not automatically deleted if users are removed from the external authentication provider; such accounts will be orphaned.
auto_grant_external_roles = false

Automatically add roles passed in via the KINETICA_ROLES HTTP header to externally-authenticated users. Specified roles that do not exist are ignored. If enable_external_authentication is false, this setting has no effect.

Important

DO NOT ENABLE unless the authentication proxy is configured to block KINETICA_ROLES HTTP headers passed in from clients.

auto_revoke_external_roles =
    Comma-separated list of roles to revoke from externally-authenticated users prior to granting roles passed in via the KINETICA_ROLES HTTP header, or * to revoke all roles. Preceding a role name with an ! overrides the revocation (e.g., *,!foo revokes all roles except foo). Leave blank to disable. If either enable_external_authentication or auto_grant_external_roles is false, this setting has no effect.
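A typical integrated external-authentication setup combines the settings above. In this sketch (values illustrative; "analyst" is a hypothetical role), all roles except analyst are revoked before the proxy-supplied roles are granted:

require_authentication = true
enable_external_authentication = true
enable_httpd_proxy = true
auto_create_external_users = true
auto_grant_external_roles = true
auto_revoke_external_roles = *,!analyst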


Auditing

This section controls the request auditor, which will audit all requests received by the server in full or in part based on the settings below. The output location of the audited requests is controlled via settings in the Auditing section of gpudb_logger.conf.

enable_audit = false
    Controls whether request auditing is enabled. If set to true, the following information is audited for every request: Job ID, URI, User, and Client Address. The settings below control whether additional information about each request is also audited. If set to false, all auditing is disabled.
audit_headers = false
    Controls whether HTTP headers are audited for each request. If enable_audit is false, this setting has no effect.
audit_body = false

Controls whether the body of each request is audited (in JSON format). If enable_audit is false this setting has no effect.

Note

For requests that insert data records, this setting does not control the auditing of the records being inserted, only the rest of the request body; see audit_data below to control this.

audit_data = false

Controls whether records being inserted are audited (in JSON format) for requests that insert data records. If either enable_audit or audit_body is false, this setting has no effect.

Note

Enabling this setting during bulk ingestion of data will rapidly produce very large audit logs and may cause disk space exhaustion; use with caution.

lock_audit = false
    Controls whether the above audit settings can be altered at runtime via the /alter/system/properties endpoint. In a secure environment where auditing is required at all times, this should be set to true to lock the settings to what is set in this file.
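For example, a locked-down configuration that audits headers and request bodies but skips inserted records (avoiding the bulk-ingest log growth noted above) would be:

enable_audit = true
audit_headers = true
audit_body = true
audit_data = false
lock_audit = true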


Licensing

license_key =
    The license key to authorize running.


Processes and Threads

min_http_threads = 8
    Set the minimum number of web server threads to spawn.
max_http_threads = 512
    Set the maximum number of web server threads to spawn.
sm_omp_threads = 2
    Set the number of parallel jobs to create for multi-child set calculations. Use -1 to use the maximum number of threads (not recommended).
kernel_omp_threads = 4
    Set the number of parallel calculation threads to use for data processing. Use -1 to use the maximum number of threads (not recommended).
init_tables_num_threads_per_tom = 8
    Set the maximum number of threads per TOM for table initialization on database startup.
max_tbb_threads_per_rank = -1
    Set the maximum number of threads (both workers and masters) to be passed to TBB on initialization. Generally speaking, max_tbb_threads_per_rank - 1 TBB workers will be created. Use -1 for no limit.
toms_per_rank = 1
    Set the number of TOMs (data container shards) per rank.
tps_per_tom = 4
    Set the number of TaskProcessors (CPU data processors) per TOM.
tcs_per_tom = 16
    Set the number of TaskCalculators (GPU data processors) per TOM.
subtask_concurrency_limit = 4
    Maximum number of simultaneous threads allocated to a given request, on each rank. Note that thread allocation may also be limited by resource group limits and/or system load.


Hardware

rank0.gpu = 0

Specify the GPU to use for all calculations on the HTTP server node, rank0.

Note

The rank0 GPU may be shared with another rank.

rank1.taskcalc_gpu =

Set the GPU device for each worker rank to use. If no GPUs are specified, each rank will round-robin the available GPUs per host system. Add rank<#>.taskcalc_gpu entries as needed for the worker ranks, where # ranges from 1 to the highest rank # among the rank<#>.host parameters.

Example setting the GPUs to use for ranks 1 and 2:

rank1.taskcalc_gpu = 0
rank2.taskcalc_gpu = 1
rank2.taskcalc_gpu =
rank0.numa_node =

Set the head HTTP rank0 numa node(s). If left empty, there will be no thread affinity or preferred memory node. The node list may be either a single node number or a range; e.g., 1-5,7,10.

If there will be many simultaneous users, specify as many nodes as possible that won't overlap the rank1 to rankN worker numa nodes that the GPUs are on.

If there will be few simultaneous users and WMS speed is important, choose the numa node the rank0.gpu is on.

rank1.base_numa_node =

Set each worker rank's preferred base numa node for CPU affinity and memory allocation. The rank<#>.base_numa_node is the node or nodes that non-data-intensive threads will run in. These nodes need not be the numa nodes that the GPU specified by the corresponding rank<#>.taskcalc_gpu is attached to, though for best performance they should be relatively near to their rank<#>.data_numa_node.

There will be no CPU thread affinity or preferred node for memory allocation if not specified or left empty.

The node list may be a single node number or a range; e.g., 1-5,7,10.

rank2.base_numa_node =
rank1.data_numa_node =

Set each worker rank's preferred data numa node for CPU affinity and memory allocation. The rank<#>.data_numa_node is the node or nodes that data intensive threads will run in and should be set to the same numa node that the GPU specified by the corresponding rank<#>.taskcalc_gpu is on for best performance.

If rank<#>.taskcalc_gpu is specified, the rank<#>.data_numa_node will be automatically set to the node the GPU is attached to; otherwise, if this parameter is not specified or left empty, there will be no CPU thread affinity or preferred node for memory allocation.

The node list may be a single node number or a range; e.g., 1-5,7,10.

rank2.data_numa_node =
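Building on the taskcalc_gpu example above, a hypothetical two-socket host might pin ranks as follows (node numbers depend entirely on the machine's topology; data_numa_node is left unset so it follows each rank's GPU automatically):

rank0.numa_node = 0-1
rank1.taskcalc_gpu = 0
rank1.base_numa_node = 0
rank2.taskcalc_gpu = 1
rank2.base_numa_node = 1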


General

protected_sets = MASTER,_MASTER,_DATASOURCE
    Tables and schemas with these names will not be deleted (comma separated).
default_ttl = 20
    Time-to-live in minutes of non-protected tables before they are automatically deleted from the database.
disable_clear_all = false
    Disallow the /clear/table request to clear all tables.
pinned_memory_pool_size = 250000000
    Size in bytes of the pinned memory pool, per rank process, used to speed up copying data to the GPU. Set to 0 to disable.
concurrent_kernel_execution = false
    Enable (if true) multiple kernels to run concurrently on the same GPU.
max_concurrent_kernels = 4
    Maximum number of kernels that can be running at the same time on a given GPU. Set to 0 for no limit. Only takes effect if concurrent_kernel_execution is true.
force_host_filter_execution = false
    If true, all filter execution will be host-only (i.e., CPU). This can be useful in high-concurrency situations and when PCIe bandwidth is a limiting factor.
max_get_records_size = 20000
    Maximum number of records that data retrieval requests such as /get/records and /aggregate/groupby will return per request.
request_timeout = 20
    Timeout (in minutes) for filter-type requests.
on_startup_script =

Set an optional executable command that will be run once when Kinetica is ready for client requests. This can be used to perform any initialization logic that needs to be run before clients connect. It will be run as the gpudb user, so you must ensure that any required permissions are set on the file to allow it to be executed. If the command cannot be executed or returns a non-zero error code, then Kinetica will be stopped. Output from the startup script will be logged to /opt/gpudb/core/logs/gpudb-on-start.log (and its dated relatives). The gpudb_env.sh script is run directly before the command, so the path will be set to include the supplied Python runtime.

Example:

on_startup_script = /bin/ks arg1 arg2 ...
enable_predicate_equi_join = true
    Enable the predicate-equi-join filter plan type.
enable_overlapped_equi_join = true
    Enable overlapped-equi-join filters.
timeout_startup_subsystem = 60
    Timeout (in seconds) to wait for each database subsystem to start up. Subsystems include the Query Planner, Graph, Stats, and HTTP servers, as well as external text-search ranks.
timeout_shutdown_subsystem = 20
    Timeout (in seconds) to wait for each database subsystem to exit gracefully before it is force-killed.
cluster_event_timeout_startup_rank = 300
    Timeout (in seconds) to wait for a rank to start during a cluster event (e.g., failover) before the event is considered failed.
timeout_shutdown_rank = 300
    Timeout (in seconds) to wait for a rank to exit gracefully before it is force-killed. Machines with slow disk drives may require longer times, and data may be lost if a drive is not responsive.


Visualization

Several of these options interact when determining the system resources required for visualization. Worker ranks use F * (N + 1) bytes of GPU memory (VRAM) and F bytes of main memory (RAM), where:

F = max_heatmap_size * max_heatmap_size * 6 bytes
N = opengl_antialiasing_level

For example, when max_heatmap_size is 3072 and opengl_antialiasing_level is 0, 56.6 MB of VRAM are required. When opengl_antialiasing_level is 4, 283 MB are required.
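The sizing rule above is easy to check programmatically. The following sketch (the function name is ours, not part of Kinetica) computes the per-worker-rank VRAM requirement from the two parameters:

```python
def visualization_vram_bytes(max_heatmap_size: int, antialiasing_level: int) -> int:
    """VRAM needed per worker rank: F * (N + 1), where
    F = max_heatmap_size^2 * 6 bytes and N = opengl_antialiasing_level."""
    f = max_heatmap_size * max_heatmap_size * 6  # also the per-rank RAM requirement
    return f * (antialiasing_level + 1)

# Defaults (3072 px, no antialiasing): about 56.6 MB of VRAM
print(visualization_vram_bytes(3072, 0))  # 56623104
# With 4x antialiasing: about 283 MB
print(visualization_vram_bytes(3072, 4))  # 283115520
```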

point_render_threshold = 100000
    Threshold number of points (per TOM) at which point rendering switches to fast mode.
symbology_render_threshold = 10000
    Threshold number of points (per TOM) after which symbology rendering falls back to regular rendering.
max_heatmap_size = 3072
    Maximum heatmap size (in pixels) that can be generated. This reserves max_heatmap_size^2 * 8 bytes of GPU memory at rank0.
symbol_resolution = 100
    The image width/height (in pixels) of SVG symbols cached in the OpenGL symbol cache.
symbol_texture_size = 4000
    The width/height (in pixels) of an OpenGL texture which caches symbol images for OpenGL rendering.
enable_opengl_renderer = true
    If true, enable the hardware-accelerated OpenGL renderer; if false, use the software-based Cairo renderer.
opengl_antialiasing_level = 0

The number of samples to use for antialiasing. Higher numbers will improve image quality but require more GPU memory to store the samples on worker ranks. This affects only the OpenGL renderer.

Value may be 0, 4, 8, or 16. When 0, antialiasing is disabled. The default value is 0.

rendering_precision_threshold = 30
    Single-precision coordinates are used for normal rendering, but depending on the precision of the geometry data and the use case, double-precision processing may be required at high zoom levels. Double-precision rendering is used starting from the zoom level specified by this parameter, which corresponds to a zoom level of the TMS or Google map service.
enable_lod_rendering = true
    Enable level-of-details rendering for fast interaction with large WKT polygon data. Only available for the OpenGL renderer (when enable_opengl_renderer is true).
enable_vectortile_service = false
    If true, enable the Vector Tile Service (VTS) to support client-side visualization of geospatial data. Enabling this option increases memory usage on ingestion.
min_vectortile_zoomlevel = 1
    Input geometries are pre-processed upon ingestion for faster vector tile generation. This parameter determines the zoom level at which the vector tile pre-processing starts. A vector tile request for a lower zoom level than this parameter takes additional time because the vector tile needs to be generated on the fly.
max_vectortile_zoomlevel = 8
    Input geometries are pre-processed upon ingestion for faster vector tile generation. This parameter determines the zoom level at which the vector tile pre-processing stops. A vector tile request for a higher zoom level than this parameter takes additional time because the vector tile needs to be generated on the fly.
vectortile_map_tiler = google
    The name of the map tiler used for the Vector Tile Service. The google and tms map tilers are currently supported. This parameter should match the map tiler of the clients' vector tile renderer.
lod_data_extent = -180 -90 180 90

Longitude and latitude ranges of geospatial data for which level-of-details representations are being generated. The parameter order is:

<min_longitude> <min_latitude> <max_longitude> <max_latitude>

The default values span the whole world, but level-of-details rendering becomes more efficient when the precise extent of the geospatial data is specified.

lod_subregion_num = 12 6
    The number of subregions in the horizontal and vertical geospatial data extent. The default values of 12 6 divide the world into subregions of 30 degrees (lon.) x 30 degrees (lat.).
lod_subregion_resolution = 512 512
    The base image resolution (width and height in pixels) at which a subregion would be rendered in a global view spanning the whole dataset. Based on this resolution, level-of-details representations are generated for the polygons located in the subregion.
max_lod_level = 8
    The maximum number of levels in the level-of-details rendering. As the number increases, level-of-details rendering becomes effective at higher zoom levels, but memory usage for storing level-of-details representations may increase.
lod_preprocessing_level = 5

The extent to which shape data are pre-processed for level-of-details rendering during data insert/load, or processed on the fly at rendering time. This is a trade-off between speed and memory: the higher the value, the faster level-of-details rendering is, but the more memory is used for storing processed shape data.

The maximum level is 10 (most shape data are pre-processed) and the minimum level is 0.


Tomcat

enable_tomcat = true
    Enable Tomcat/GAdmin.


Persistence

persist_directory = /opt/gpudb/persist
    Specify a base directory to store persistence data files.
sms_directory = ${gaia.persist_directory}
    Base directory to store hashed strings.
text_index_directory = ${gaia.persist_directory}
    Base directory to store the text search index.
temp_directory = /tmp
    Directory for Kinetica to use to store temporary files. Must be a fully qualified path with at least 100 MB of free space and execute permission.
persist_sync = false
    Whether or not to use synchronous persistence file writing. If false, files will be written asynchronously.
indexdb_flush_immediate = false
    Flush the row store in persist after every update.
metadata_flush_immediate = false
    Flush metadata files in persist after every update.
fsync_on_interval = false
    Fsync the row store in persist whenever files are asynchronously flushed to disk. See persist_sync_time for setting the interval.
fsync_indexdb_immediate = false
    Fsync the row store in persist after every update to ensure it is fully written.
fsync_metadata_immediate = false
    Fsync the metadata files in persist after every update to ensure they are fully written.
fsync_metadata_rank0_immediate = true
fsync_inodes_immediate = false
    Fsync directories in persist after every update to ensure the filesystem table of contents is up to date.
synchronous_compression = false
    Compress data in memory immediately (or in the background) when compression is applied to an existing table's column(s) via /alter/table.
persist_sync_time = 5
    The maximum time in seconds a secondary column store persist data file can be out of sync with memory. Set to a very high number to disable forced syncing.
load_vectors_on_start = always

Startup data-loading scheme:

  • always : load as much of the stored data as possible into memory before accepting requests
  • lazy : load the necessary data to start, and load as much of the remainder of the stored data as possible into memory lazily
  • on_demand : only load data as requests use it
build_pk_index_on_start = always
build_materialized_views_on_start = always
indexdb_toc_size = 1000000
    Table of contents size for the IndexedDb object file store.
indexdb_max_open_files = 128
    Maximum number of open files for the IndexedDb object file store.
indexdb_tier_by_file_length = false
    Disable detection of sparse file support and use the full file length, which may be an over-estimate of the actual usage in the persist tier.
sms_max_open_files = 128
    Maximum number of open files (per TOM) for the SMS (string) store.
chunk_size = 8000000
    Number of records per chunk (0 disables chunking).
execution_mode = device

Determines whether to execute kernels on host (CPU) or device (GPU). Possible values are:

  • default : engine decides
  • host : execute only host
  • device : execute only device
  • <rows> : execute on the host if chunked column contains the given number of rows or fewer; otherwise, execute on device.
shadow_cube_enabled = false
    Whether or not to enable chunk caching.
shadow_agg_size = 100000000
    The maximum number of bytes in the shadow aggregate cache.
shadow_filter_size = 100000000
    The maximum number of bytes in the shadow filter cache.
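As described under execution_mode above, a numeric value acts as a row threshold rather than a fixed host/device choice. For instance, a hypothetical hybrid setting that keeps small chunks on the CPU and sends larger ones to the GPU:

execution_mode = 100000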


Statistics

enable_stats_server = true
    Run a statistics server to collect information about Kinetica and the machines it runs on.
stats_server_ip_address = ${gaia.host0.address}
    Statistics server IP address (run on the head node); the default port is 2003.
stats_server_port = 2003
stats_server_namespace = gpudb
    Statistics server namespace; should be a machine identifier.
event_server_internal = true
    Use the internal event and metric server on the head host server if true; otherwise, use the Kagent services. Note that Kagent installs will automatically set this value to false.
event_server_address = ${gaia.host0.address}
    Event collector server address and port.
event_server_port = 9080
alertmanager_address = ${gaia.host0.address}
    Alertmanager server address and port.
alertmanager_port = 9089
fluentbit_address =
    Fluent Bit server address and port.
fluentbit_port = 5170


Procs

enable_procs = false
    Enable procs (UDFs).
proc_directory = /opt/gpudb/procs
    Directory where proc files are stored at runtime. Must be a fully qualified path with execute permission. If not specified, temp_directory will be used.
proc_data_directory = /opt/gpudb/procs
    Directory where data transferred to and from procs is written. Must be a fully qualified path with sufficient free space for the required volume of data. If not specified, temp_directory will be used.


Graph Servers

One or more graph servers can be created across available hosts.


Global Parameters

enable_graph_server = false
    Enable/disable all graph operations.
graph.head_port = 8100
    Port used for responses from the graph server to the database server.


Server-Specific Parameters

Any number of graph servers may be configured, starting with server0, and must be specified in consecutive order (e.g., server0, server1, server2).

Server settings are defined as follows:

graph.server<#>.<parameter>
graph.server0.host = host0

Valid parameter names include:

  • host : Host on which the graph server process will be run.
  • port : Ports used for requests from the database server to the graph server(s). Either all ports (one per graph server) should be specified or a single starting value that will be incremented for each subsequent graph.
  • gpus : List of GPU devices to be used by a graph server. Each server would ideally have one or more dedicated GPUs. Default is empty, which allows usage of all available GPUs. Setting this to -1 disables GPU usage.
  • ram_limit : Maximum RAM (bytes) a server can use at any given time. Default is -1, which disables CPU memory restriction.
  • vram_limit : Maximum VRAM (bytes) a server can use at any given time. Default is -1, which disables GPU memory restriction.

Example of 2 graph servers with RAM restrictions running on CPU only:

graph.head_port = 8099
graph.server0.host = host2
graph.server1.host = host4
graph.server0.port = 8100
graph.server1.port = 8101
graph.server0.gpus = -1
graph.server1.gpus = -1
graph.server0.ram_limit = 250000000
graph.server1.ram_limit = 500000000
graph.server0.port = 8101
graph.server0.gpus =
graph.server0.ram_limit = -1
graph.server0.vram_limit = -1


KiFS

enable_kifs = false

Enable access to the KiFS file system from within procs (UDFs). This will create a filesystem schema and an internal kifs user in Kinetica and mount the file system at the mount point specified below. By default, the file system is accessible to the gpudb_proc user; to make it accessible to other users, you must ensure that user_allow_other is enabled in /etc/fuse.conf and add the other users to the gpudb_proc group.

For instance:

sudo usermod -a -G gpudb_proc <user>
kifs_mount_point = /opt/gpudb/kifs
    Parent directory of the mount point for the KiFS file system. Must be a fully qualified path. The actual mount point will be a subdirectory mount below this directory. Note that this folder must have read, write, and execute permissions for the gpudb user and the gpudb_proc group, and it cannot be a path on an NFS.


Etcd

etcd_urls =
    List of accessible etcd server URLs.
etcd_auth_user =
    Encrypted login credential for etcd at the given URLs.


HA

Enable/disable HA here. All other HA parameters are stored in etcd at the address specified in the Etcd section above.

enable_ha = false
    Enable HA.


Machine Learning (ML)

enable_ml = false
    Enable the ML server.
ml_api_port = 9187
    Default ML API service port number.


Alerts

enable_alerts = true
    Enable the alerting system.
alert_exe =
    Executable to run when an alert condition occurs. This executable will only be run on rank0 and does not need to be present on other nodes.
alert_host_status = true
    Trigger an alert whenever the status of a host or rank changes.
alert_host_status_filter = fatal_init_error
    Optionally, filter host alerts for a comma-delimited list of statuses. If the filter is empty, every host status change will trigger an alert.
alert_rank_status = true
    Trigger an alert whenever the status of a rank changes.
alert_rank_status_filter = fatal_init_error, not_responding, terminated
    Optionally, filter rank alerts for a comma-delimited list of statuses. If the filter is empty, every rank status change will trigger an alert.
alert_rank_cuda_error = true
    Trigger an alert if a CUDA error occurs on a rank.
alert_rank_fallback_allocator = true

Trigger alerts when the fallback allocator is employed; e.g., host memory is allocated because GPU allocation fails.

Note

To prevent a flooding of alerts, if a fallback allocator is triggered in bursts, not every use will generate an alert.

alert_error_messages = true
    Trigger generic error message alerts in cases of various significant runtime errors.
alert_memory_percentage = 20, 10, 5, 1
    Trigger an alert if available memory on any given node falls to or below a certain threshold, either absolute (number of bytes) or percentage of total memory. For multiple thresholds, use a comma-delimited list of values.
alert_memory_absolute =
alert_disk_percentage = 20, 10, 5, 1
    Trigger an alert if available disk space on any given node falls to or below a certain threshold, either absolute (number of bytes) or percentage of total disk space. For multiple thresholds, use a comma-delimited list of values.
alert_disk_absolute =
alert_max_stored_alerts = 100
    The maximum number of triggered alerts guaranteed to be stored at any given time. When this number of alerts is exceeded, older alerts may be discarded to stay within the limit.
trace_directory = /tmp
    Directory where the trace event and summary files are stored. Must be a fully qualified path with sufficient free space for the required volume of data.
trace_event_buffer_size = 1000000
    The maximum number of trace events to be collected.
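For example, to run a notification script when alerts fire and tighten the memory thresholds (the script path below is hypothetical; any executable readable by rank0 will do):

alert_exe = /opt/gpudb/custom/notify-ops.sh
alert_memory_percentage = 10, 5, 1
alert_memory_absolute = 1073741824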


Failover (NPlus1)

Configuration ParameterDefault ValueDescription
np1.enable_head_failoverfalse

Whether or not the system should fail over failed processes on the head/worker nodes to other nodes

  • true : Initiate failover to another node if a process fails
  • false : Do not initiate failover to another node (restart on the same node)
np1.enable_worker_failoverfalse
np1.rank_restart_attempts1The number of successive times a process will be restarted on its current node before failover is attempted. A process must exceed this restart limit within the restart interval (np1.restart_interval) for failover to be initiated.
np1.critical_restart_attempts1
np1.non_critical_restart_attempts3
np1.restart_interval60Time interval in seconds since last restart after which restart count will be reset
np1.storage_api_script An API script is required for persistent storage that supports dynamic mounting, such as cloud-based storage. This script is responsible for issuing the underlying API calls directly, based on requests from the database. If this field is blank, all hosts are assumed to have access to the persist storage of all ranks at all times.
np1.enable_concurrent_mount_opstrueEnable concurrent mounting operations. By default, multiple attach or detach disk volume operations are permitted to run concurrently. Set to false to limit a host to running a single volume attach/detach operation at a time.
np1.load_vectors_on_migrationalways

Post-migration startup data-loading scheme:

  • always : load as much of the stored data as possible into memory before accepting requests
  • lazy : load the necessary data to start, and load as much of the remainder of the stored data as possible into memory lazily
  • on_demand : only load data as requests use it
np1.build_pk_index_on_migrationalways
np1.build_materialized_views_on_migrationalways
heartbeat_interval20Heartbeats are used to detect host-level failures. The interval, in seconds, at which heartbeat requests are broadcast to hosts
heartbeat_timeout10The allowable window in seconds for heartbeat responses (must be less than heartbeat_interval)
heartbeat_missed_limit3The number of allowable consecutive missed heartbeats before the host is considered degraded and any failover events begin
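As an illustrative sketch, a cluster that fails over head and worker processes after a rank exceeds two restarts within a minute might use settings like these (values are examples only, not recommendations):

np1.enable_head_failover = true
np1.enable_worker_failover = true
np1.rank_restart_attempts = 2
np1.restart_interval = 60
heartbeat_interval = 20
heartbeat_timeout = 10
heartbeat_missed_limit = 3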


SQL Engine

Configuration ParameterDefault ValueDescription
sql.enable_plannertrueEnable Query Planner
sql.planner.addressipc://${gaia.temp_directory}/gpudb-query-engine-0

The network URI on which the query planner will start. The URI can be either TCP or IPC: a TCP address indicates a remote query planner, which may run on another host, while an IPC address indicates a local query planner.

Example for remote or TCP servers:

sql.planner.address  = tcp://127.0.0.1:9293
sql.planner.address  = tcp://HOST_IP:9293

Example for local IPC servers:

sql.planner.address  = ipc:///tmp/gpudb-query-engine-0
sql.planner.remote_debug_port0

Remote debugger port used for the query planner. Setting the port to 0 disables remote debugging.

Note

The recommended port is 5005

sql.planner.max_memory4096The maximum memory, in megabytes, for the query planner to use.
sql.planner.max_stack6The maximum stack size, in megabytes, for the query planner threads.
sql.planner.timeout120Query planner timeout in seconds
sql.planner.workers16Maximum number of query planner threads
sql.results.cachingtrueEnable query results caching
sql.results.cache_ttl60TTL of the query cache results table
sql.force_binary_joinsfalsePerform joins between only 2 tables at a time; by default, all tables involved in the operation are joined at once
sql.force_binary_set_opsfalsePerform unions/intersections/exceptions between only 2 tables at a time; by default, all tables involved in the operation are processed at once
sql.plan_cache_size4000The maximum number of entries in the SQL plan cache. The default is 4000 entries, but the configurable range is 1 - 1000000. Plan caching will be disabled if the value is set outside of that range.
sql.rule_based_optimizationtrueEnable rule-based query rewrites
sql.cost_based_optimizationfalseEnable the cost-based optimizer
sql.distributed_joinstrueEnable distributed joins
sql.distributed_operationstrueEnable distributed operations
sql.parallel_executiontrueEnable parallel query evaluation
sql.max_parallel_steps4Max parallel steps
sql.paging_table_ttl20TTL of the paging results table
sql.max_view_nesting_levels16Max allowed view nesting levels. Valid range: 1 - 64
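For instance, a deployment that wants a larger planner memory budget, a longer planner timeout, and longer-lived cached results might adjust the following (values are illustrative only):

sql.planner.max_memory = 8192
sql.planner.timeout = 300
sql.results.caching = true
sql.results.cache_ttl = 300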


External Files

Configuration ParameterDefault ValueDescription
external_files_directory${gaia.persist_directory}Defines the directory from which external files can be loaded
external_file_reader_num_tasks4Maximum number of simultaneous threads allocated to a given external file read request, on each rank. Note that thread allocation may also be limited by resource group limits, the subtask_concurrency_limit setting, or system load.


Tiered Storage

Defines system resources using a set of containers (tiers) in which data can be stored, either on a temporary (memory) or permanent (disk) basis. Tiers are defined in terms of their maximum capacity and watermark thresholds. The general format for defining tiers is:

tier.<tier_type>.<config_level>.<parameter>

where tier_type is one of the five basic types of tiers:

  • vram : GPU memory
  • ram : Main memory
  • disk : Disk cache
  • persist : Permanent storage
  • cold : Extended long-term storage

Each tier can be configured on a global or per-rank basis, indicated by the config_level:

  • default : global, applies to all ranks
  • rank<#> : local, applies only to the specified rank, overriding any global default

If a field is not specified at the rank<#> level, the specified default value applies. If neither is specified, the global system defaults will take effect, which vary by tier.

The parameters are also tier-specific and will be listed in their respective sections, though every tier, except Cold Storage, will have the following:

  • limit : [ -1, 1 .. N ] (bytes)
  • high_watermark : [ 1 .. 100 ] (percent)
  • low_watermark : [ 1 .. 100 ] (percent)

Note

To disable watermark-based eviction, set the low_watermark and high_watermark values to 100. Watermark-based eviction is also ignored if the tier limit is set to -1 (no limit).
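As an illustration of the default/rank pattern, the following caps RAM globally while giving rank2 a smaller limit; the byte values are hypothetical:

tier.ram.default.limit = 100000000000
tier.ram.rank2.limit = 50000000000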


Global Tier Parameters

Configuration ParameterDefault ValueDescription
tier.global.concurrent_wait_timeout120Timeout in seconds for subsequent requests to wait on a locked resource


VRAM Tier

The VRAM Tier is composed of the memory available in one or multiple GPUs per host machine.

A default memory limit and eviction thresholds can be set for CUDA-enabled devices across all ranks, while one or more ranks may be configured to override those defaults.

The general format for VRAM settings:

tier.vram.[default|rank<#>].all_gpus.<parameter>
Configuration ParameterDefault ValueDescription
tier.vram.default.all_gpus.limit-1

Valid parameter names include:

  • limit : The maximum VRAM (bytes) per rank that can be allocated on GPU(s) across all resource groups. Default is -1, signifying to reserve 95% of the available GPU memory at startup.
  • high_watermark : VRAM percentage used eviction threshold. Once memory usage exceeds this value, evictions from this tier will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% memory usage threshold.
  • low_watermark : VRAM percentage used recovery threshold. Once memory usage exceeds the high_watermark, evictions will continue until memory usage falls below this recovery threshold. Default is 50, signifying a 50% memory usage threshold.
tier.vram.default.all_gpus.high_watermark90
tier.vram.default.all_gpus.low_watermark50
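For example, to cap GPU memory at 8 GB per rank globally while letting rank1 keep the default 95% reservation (the limit value is illustrative):

tier.vram.default.all_gpus.limit = 8589934592
tier.vram.rank1.all_gpus.limit = -1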


RAM Tier

The RAM Tier represents the RAM available for data storage per rank.

The RAM Tier is NOT used for small, non-data objects or variables that are allocated and deallocated for program flow control or used to store metadata or other similar information; these continue to use either the stack or the regular runtime memory allocator. This tier should be sized on each machine such that there is sufficient RAM left over to handle this overhead, as well as the needs of other processes running on the same machine.

A default memory limit and eviction thresholds can be set across all ranks, while one or more ranks may be configured to override those defaults.

The general format for RAM settings:

tier.ram.[default|rank<#>].<parameter>
Configuration ParameterDefault ValueDescription
tier.ram.default.limit-1

Valid parameter names include:

  • limit : The maximum RAM (bytes) per rank that can be allocated across all resource groups. Default is -1, signifying no limit and ignore watermark settings.
  • high_watermark : RAM percentage used eviction threshold. Once memory usage exceeds this value, evictions from this tier will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% memory usage threshold.
  • low_watermark : RAM percentage used recovery threshold. Once memory usage exceeds the high_watermark, evictions will continue until memory usage falls below this recovery threshold. Default is 50, signifying a 50% memory usage threshold.
tier.ram.default.high_watermark90
tier.ram.default.low_watermark50
tier.ram.rank0.limit-1The maximum RAM (bytes) for processing data at rank 0. Overrides the overall default RAM tier limit.


Disk Tier

Disk Tiers are used as temporary swap space for data that doesn't fit in RAM or VRAM. The disk should be as fast or faster than the Persist Tier storage since this tier is used as an intermediary cache between the RAM and Persist Tiers. Multiple Disk Tiers can be defined on different disks with different capacities and performance parameters. No Disk Tiers are required, but they can improve performance when the RAM Tier is at capacity.

A default storage limit and eviction thresholds can be set across all ranks for a given Disk Tier, while one or more ranks within a Disk Tier may be configured to override those defaults.

The general format for Disk settings:

tier.disk<#>.[default|rank<#>].<parameter>

Multiple Disk Tiers may be defined such as disk, disk0, disk1, ... etc. to support different tiering strategies that use any one of the Disk Tiers. A tier strategy can have, at most, one Disk Tier. Create multiple tier strategies to use more than one Disk Tier, one per strategy. See tier_strategy parameter for usage.

Configuration ParameterDefault ValueDescription
tier.disk0.default.path/opt/gpudb/diskcache

Valid parameter names include:

  • path : A base directory to use as a swap space for this tier.
  • limit : The maximum disk usage (bytes) per rank for this tier across all resource groups. Default is -1, signifying no limit and ignore watermark settings.
  • high_watermark : Disk percentage used eviction threshold. Once disk usage exceeds this value, evictions from this tier will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% disk usage threshold.
  • low_watermark : Disk percentage used recovery threshold. Once disk usage exceeds the high_watermark, evictions will continue until disk usage falls below this recovery threshold. Default is 50, signifying a 50% disk usage threshold.
  • store_persistent_objects : If true, allow the disk cache to store copies of data even if they are already stored in a persistent tier (persist/cold).
tier.disk0.default.limit-1
tier.disk0.default.high_watermark90
tier.disk0.default.low_watermark50
tier.disk0.default.store_persistent_objectsfalse
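An illustrative Disk Tier configuration, using a hypothetical SSD mount point and a 100 GB per-rank cap:

tier.disk0.default.path = /mnt/ssd/gpudb/diskcache
tier.disk0.default.limit = 107374182400
tier.disk0.default.high_watermark = 90
tier.disk0.default.low_watermark = 50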


Persist Tier

The Persist Tier is a single pseudo-tier that contains data in persistent form that survives between restarts. Although it also is a disk-based tier, its behavior is different from Disk Tiers: data for persistent objects is always present in the Persist Tier (or Cold Storage Tier, if configured), but may not be up-to-date at any given time.

A default storage limit and eviction thresholds can be set across all ranks, while one or more ranks may be configured to override those defaults. The Graph Solver engine may be given its own storage settings; however, it is not tiered, and therefore cannot have limit/watermark settings applied.

The general format for Persist settings:

tier.persist.[default|rank<#>|text<#>|graph<#>].<parameter>

Warning

In general, limits on the Persist Tier should only be set if one or more Cold Storage Tiers are configured. Without a supporting Cold Storage Tier to evict objects in the Persist Tier to, operations requiring space in the Persist Tier will fail when the limit is reached.

Configuration ParameterDefault ValueDescription
tier.persist.default.path${gaia.persist_directory}

Valid parameter names include:

  • path : Base directory to store column and object vectors.
  • storage : The storage volume corresponding to the Persist Tier, for managed storage volumes. Must be the vol<#> of a configured storage volume. Do not specify a default, as each rank and graph server should have its own storage volume (unlisted in the table below, as there is no default value).
  • limit : The maximum disk usage (bytes) per rank for this tier across all resource groups. Default is -1, signifying no limit and ignore watermark settings.
  • high_watermark : Disk percentage used eviction threshold. Once disk usage exceeds this value, evictions from this tier to cold storage (if configured) will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% disk usage threshold.
  • low_watermark : Disk percentage used recovery threshold. Once disk usage exceeds the high_watermark, evictions will continue until disk usage falls below this recovery threshold. Default is 50, signifying a 50% disk usage threshold.

Note

path and storage are the only applicable parameters for text and graph

Example showing a rank0 configuration:

tier.persist.rank0.path = /opt/data_rank0
tier.persist.rank0.storage = vol0
tier.persist.rank0.limit = -1
tier.persist.rank0.high_watermark = 90
tier.persist.rank0.low_watermark = 50
tier.persist.default.limit-1
tier.persist.default.high_watermark90
tier.persist.default.low_watermark50


Cold Storage Tier

Cold Storage Tiers can be used to extend the storage capacity of the Persist Tier. Assign a tier strategy with cold storage to objects that will be infrequently accessed since they will be moved as needed from the Persist Tier. The Cold Storage Tier is typically a much larger capacity physical disk or a cloud-based storage system which may not be as performant as the Persist Tier storage.

A default storage limit and eviction thresholds can be set across all ranks for a given Cold Storage Tier, while one or more ranks within a Cold Storage Tier may be configured to override those defaults.

Note

If an object needs to be pulled out of cold storage during a query, it may need to use the local persist directory as temporary swap space. This may trigger an eviction of other persisted items to cold storage due to a low disk space condition, as defined by the watermark settings for the Persist Tier.

The general format for Cold Storage settings:

tier.cold<#>.[default|rank<#>].<parameter>

Multiple Cold Storage Tiers may be defined such as cold, cold0, cold1, ... etc. to support different tiering strategies that use any one of the Cold Storage Tiers. A tier strategy can have, at most, one Cold Storage Tier. Create multiple tier strategies to use more than one Cold Storage Tier, one per strategy. See tier_strategy parameter for usage.

Configuration ParameterDefault ValueDescription
tier.cold0.default.typedisk

Valid parameter names include:

  • type : The storage provider type. Currently supports disk (local/network storage), hdfs (Hadoop Distributed File System), azure (Azure Blob Storage), and s3 (Amazon S3 bucket)
  • base_path : A base path based on the provider type for this tier.
  • wait_timeout : Timeout in seconds for reading from or writing to this storage provider.
  • connection_timeout : Timeout in seconds for connecting to this storage provider.

HDFS-specific parameter names:

  • hdfs_uri : The host IP address & port for the Hadoop Distributed File System. For example: hdfs://localhost:8020
  • hdfs_principal : The effective principal name to use when connecting to the hadoop cluster.
  • hdfs_use_kerberos : Set to true to enable Kerberos authentication to an HDFS storage server. The credentials of the principal are in the file specified by the hdfs_kerberos_keytab parameter. Note that Kerberos's kinit command will be run when the database is started.
  • hdfs_kerberos_keytab : The Kerberos keytab file used to authenticate the gpudb Kerberos principal.

Amazon S3-specific parameter names:

  • s3_bucket_name
  • s3_region (optional)
  • s3_endpoint (optional)
  • s3_aws_access_key_id (optional)
  • s3_aws_secret_access_key (optional)

Note

If not supplying the s3_aws_access_key_id and/or s3_aws_secret_access_key setting values via this configuration file, the values should instead be provided via the AWS CLI or via the respective AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

Microsoft Azure-specific parameter names:

  • azure_container_name
  • azure_storage_account_name
  • azure_storage_account_key
  • azure_sas_token
tier.cold0.default.base_path/mnt/gpudb/cold_storage
tier.cold0.default.wait_timeout1
tier.cold0.default.connection_timeout1
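A hypothetical S3-backed Cold Storage Tier, with credentials supplied via the environment rather than this file, might look like the following; the bucket, region, and base path are placeholders:

tier.cold0.default.type = s3
tier.cold0.default.base_path = kinetica/cold
tier.cold0.default.s3_bucket_name = my-example-bucket
tier.cold0.default.s3_region = us-east-1
tier.cold0.default.wait_timeout = 10
tier.cold0.default.connection_timeout = 10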


Tier Strategy

Configuration ParameterDefault ValueDescription
tier_strategy.defaultVRAM 1, RAM 5, PERSIST 5

Default strategy to apply to tables or columns when one was not provided during table creation. This strategy is also applied to a resource group that does not specify one at time of creation.

The strategy is formed by chaining together the tier types and their respective eviction priorities. Any given tier may appear no more than once in the chain and the priority must be in range 1 - 10, where 1 is the lowest priority (first to be evicted) and 9 is the highest priority (last to be evicted). A priority of 10 indicates that an object is unevictable.

Each tier's priority is in relation to the priority of other objects in the same tier; e.g., RAM 9, DISK2 1 indicates that an object will be the highest evictable priority among objects in the RAM Tier (last evicted), but that it will be the lowest priority among objects in the Disk Tier named disk2 (first evicted). Note that since an object can only have one Disk Tier instance in its strategy, the corresponding priority will only apply in relation to other objects in Disk Tier instance disk2.

See the Tiered Storage section for more information about tier type names.

Format:

<tier1> <priority>, <tier2> <priority>, <tier3> <priority>, ...

Examples using a Disk Tier named disk2 and a Cold Storage Tier cold0:

vram 3, ram 5, disk2 3, persist 10
vram 3, ram 5, disk2 3, persist 6, cold0 10
tier_strategy.predicate_evaluation_interval60The interval, in minutes, at which tier strategy predicates are evaluated


Default Resource Group

Resource groups are used to enforce simultaneous memory, disk and thread usage limits for all users within a given group. Users not assigned to a specific resource group will be placed within this default group.

Configuration ParameterDefault ValueDescription
resource_group.default.schedule_priority50The scheduling priority for this group's operations, 1 - 100, where 1 is the lowest priority and 100 is the highest
resource_group.default.max_tier_priority10The maximum eviction priority for tiered objects, 1 - 10. This supersedes any priorities set by user-provided tiering strategies.
resource_group.default.max_cpu_concurrency-1Maximum number of concurrent operations; minimum is 4; -1 for no limit
resource_group.default.vram_limit-1The maximum memory (bytes) this group can use at any given time in the VRAM tier; -1 for no limit
resource_group.default.ram_limit-1The maximum memory (bytes) this group can use at any given time in the RAM tier; -1 for no limit
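For example, to keep the default group at normal scheduling priority while capping its concurrency and RAM usage (values are illustrative):

resource_group.default.schedule_priority = 50
resource_group.default.max_cpu_concurrency = 8
resource_group.default.ram_limit = 53687091200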


Storage Volumes

When persisted rank data is stored on attached external storage volumes, the following config entries are used to automatically attach and mount the volume upon migration of a rank to another host. Attaching and mounting is performed by the script specified by np1.storage_api_script.

Configuration ParameterDefault ValueDescription

The general format for storage volumes:

storage.volumes.vol<#>.<parameter>

Volumes are numbered starting at zero; there should be one volume per rank, plus additional volumes for any graph servers.

Valid parameter names include:

  • fs_uuid : The UUID identifier for the volume on the cloud provider.
  • id : The cloud identifier of the volume.
  • mount_point : The local path to mount the cloud volume. The folder must exist and be owned by gpudb.

Example:

storage.volumes.vol0.fs_uuid = my_vol_uuid
storage.volumes.vol0.id = /subscriptions/my_az_subscription_uuid/resourceGroups/my_az_rg/providers/Microsoft.Compute/
storage.volumes.vol0.mount_point = /opt/data_rank0