Configuration Reference

Kinetica configuration consists of several user-modifiable system parameters present in /opt/gpudb/core/etc/gpudb.conf that are used to set up and tune the system.


Version

Configuration Parameter | Default Value | Description
file_version7.2.2.3.20241024132445The current version of this configuration. This parameter is automatically updated by Kinetica and should not be edited manually.


Identification

Configuration Parameter | Default Value | Description
ring_namering0Name of the ring containing the high availability clusters.
cluster_namecluster0Name of the cluster containing all the nodes of the database.


Hosts

Specify each unique host in the cluster. Each host runs a local instance of HostManager, which manages database services and components that are part of the Kinetica install.

Host settings are defined as follows:

host<#>.<parameter>

Any number of hosts may be configured, starting with host0, and must be specified in consecutive order (e.g., host0, host1, host2).

Note

If either public_address or host_manager_public_url is specified for any host, that parameter must be specified for all hosts.

Configuration Parameter | Default Value | Description
host0.address127.0.0.1

Host parameters include the following:

  • address : The unique address for this host. Single host clusters may use 127.0.0.1. This field is required.
  • public_address : An optional public address that clients should use when performing multi-head operations.
  • ha_address : An optional address to allow inter-cluster communication with HA when address is not routable between clusters.
  • host_manager_public_url: An optional public URL that can be used to access HostManager endpoints. See host_manager_http_port.
  • ram_limit : An optional upper limit for RAM (in bytes) on this host, capping the amount of memory that all Kinetica processes are allowed to consume. Use -1 for no limit.
  • gpus : An optional comma-separated list of GPUs (as reported by NVML) that may be reserved by Kinetica services (e.g., ranks, graph server, ML, etc.). Leave empty to make all GPUs available for use. The same failover caveat with the RAM limit also applies.
  • accepts_failover : Whether or not this host should accept failed processes that need to be migrated off degraded hosts. Default is to not accept failed processes.
host0.public_address 
host0.ha_address 
host0.host_manager_public_url 
host0.ram_limit-1
host0.gpus 
host0.accepts_failoverfalse
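
For illustration, a hypothetical two-host cluster (the addresses, RAM limit, and GPU list below are placeholders, not recommendations) could be configured as:

host0.address = 192.168.0.10
host0.accepts_failover = true
host1.address = 192.168.0.11
host1.ram_limit = 100000000000
host1.gpus = 0,1
host1.accepts_failover = false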


Ranks

Configuration Parameter | Default Value | Description
rank0.hosthost0Specify the host on which to run each rank worker process in the cluster. Multiple ranks may run on the same host. If a rank is specified with an empty host entry, it is effectively disabled (i.e., removed from the cluster) and no persisted data for this rank will be loaded even if present on any host. (e.g. 'rank2.host = host1' to run rank2 on host1)
rank1.hosthost0
rank2.hosthost0
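
As a sketch only (the host assignment is hypothetical), ranks could be spread across two hosts as follows:

rank0.host = host0
rank1.host = host0
rank2.host = host1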


Network

Configuration Parameter | Default Value | Description
head_ip_address${gaia.host0.address}Head HTTP server IP address. Set to the publicly accessible IP address of the first process, rank0.
head_port9191Head HTTP server port to use for head_ip_address.
use_httpsfalseSet to true to use HTTPS; if true then https_key_file and https_cert_file must be provided
https_key_file 

Files containing the SSL private key and the SSL certificate for HTTPS. If required, a self-signed certificate (expires after 10 years) can be generated via the command:

openssl req -newkey rsa:2048 -new -nodes -x509 \
  -days 3650 -keyout key.pem -out cert.pem
https_cert_file 
http_allow_origin*Value to return via Access-Control-Allow-Origin HTTP header (for Cross-Origin Resource Sharing). Set to empty to not return the header and disallow CORS.
http_keep_alivefalseKeep HTTP connections alive between requests
enable_httpd_proxyfalse

Start an HTTP server as a proxy to handle LDAP and/or Kerberos authentication. Each host will run an HTTP server and access to each rank is available through http://host:8082/gpudb-1, where port 8082 is defined by httpd_proxy_port.

Note

HTTP external endpoints are not affected by the use_https parameter above. If you wish to enable HTTPS, you must edit /opt/gpudb/httpd/conf/httpd.conf and set up HTTPS as per the Apache httpd documentation at https://httpd.apache.org/docs/2.2/

httpd_proxy_port8082TCP port that the httpd auth proxy server will listen on if enable_httpd_proxy is true.
httpd_proxy_use_httpsfalseSet to true if the httpd auth proxy server is configured to use HTTPS.
trigger_port9001Trigger ZMQ publisher server port (-1 to disable), uses the head_ip_address interface.
set_monitor_port9002Set monitor ZMQ publisher server port (-1 to disable), uses the head_ip_address interface.
set_monitor_proxy_port9003

Set monitor ZMQ publisher internal proxy server port (-1 to disable), uses the head_ip_address interface.

Important

Disabling this port effectively prevents worker nodes from publishing set monitor notifications when multi-head ingest is enabled (see enable_worker_http_servers).

set_monitor_queue_size1000Set monitor queue size
enable_revealtrueEnable Reveal runtime
global_manager_port_one5552Internal communication ports
global_manager_pub_port5553Host manager synchronization port
global_manager_local_pub_port5554Local host manager port
host_manager_http_port9300HTTP port for web portal of the host manager
enable_worker_http_serverstrueEnable worker HTTP servers; each process runs its own server for multi-head ingest.
rank1.worker_http_server_port9192Optionally, specify the worker HTTP server ports. The default is to use (head_port + rank #) for each worker process, where the rank number ranges from 1 to the number of ranks configured via the rank<#>.host parameters.
rank2.worker_http_server_port9193
rank0.public_url 

Optionally, specify a public URL for each worker HTTP server that clients should use to connect for multi-head operations (see the example following this table).

Note

If specified for any ranks, a public URL must be specified for all ranks. Cannot be used in conjunction with nplus1.

rank1.public_url 
rank2.public_url 
rank0.communicator_port6555Specify the TCP ports each rank will use to communicate with the others. If the port for any rank<#> is not specified the port will be assigned to rank0.communicator_port + rank #.
rank1.communicator_port6556
rank2.communicator_port6557
compress_network_datafalseEnables compression of inter-node network data transfers.
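
As referenced above, the following is an illustrative multi-head setup for a three-rank cluster; the hostname and port values are placeholders:

rank1.worker_http_server_port = 9192
rank2.worker_http_server_port = 9193
rank0.public_url = https://kinetica.example.com:9191
rank1.public_url = https://kinetica.example.com:9192
rank2.public_url = https://kinetica.example.com:9193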


Security

Configuration Parameter | Default Value | Description
require_authenticationtrueRequire authentication.
enable_authorizationfalseEnable authorization checks.
min_password_length0Minimum password length.
enable_external_authenticationfalse

Enable external (LDAP, Kerberos, etc.) authentication. User IDs of externally-authenticated users must be passed in via the REMOTE_USER HTTP header from the authentication proxy. May be used in conjunction with the enable_httpd_proxy setting above for an integrated external authentication solution (see the example following this table).

Important

DO NOT ENABLE unless external access to Kinetica ports has been blocked via firewall AND the authentication proxy is configured to block REMOTE_USER HTTP headers passed in from clients.

external_authentication_handshake_key Encrypted key that, if specified, must be passed in unencrypted via the KINETICA_HANDSHAKE_KEY HTTP header from the authentication proxy if a REMOTE_USER HTTP header is also passed in. A missing or incorrect handshake key will result in rejection of the request.
unified_security_namespacefalseUse a single namespace for internal and external user IDs and role names. If false, external user IDs must be prefixed with @ to differentiate them from internal user IDs and role names (except in the REMOTE_USER HTTP header, where the @ is omitted).
auto_create_external_usersfalseAutomatically create accounts for externally-authenticated users. If enable_external_authentication is false, this setting has no effect. Note that accounts are not automatically deleted if users are removed from the external authentication provider and will be orphaned.
auto_grant_external_rolesfalse

Automatically add roles passed in via the KINETICA_ROLES HTTP header to externally-authenticated users. Specified roles that do not exist are ignored. If enable_external_authentication is false, this setting has no effect.

Important

DO NOT ENABLE unless the authentication proxy is configured to block KINETICA_ROLES HTTP headers passed in from clients.

auto_revoke_external_roles Comma-separated list of roles to revoke from externally-authenticated users prior to granting roles passed in via the KINETICA_ROLES HTTP header, or * to revoke all roles. Preceding a role name with an ! overrides the revocation (e.g. *,!foo revokes all roles except foo). Leave blank to disable. If either enable_external_authentication or auto_grant_external_roles is false, this setting has no effect.
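
A minimal sketch of an integrated external authentication setup combining the settings above (values are illustrative only; as the warnings above note, this requires external access to Kinetica ports to be firewalled and a proxy that blocks client-supplied REMOTE_USER and KINETICA_ROLES headers):

enable_httpd_proxy = true
require_authentication = true
enable_external_authentication = true
unified_security_namespace = true
auto_create_external_users = true
auto_grant_external_roles = true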


External Security

Configuration Parameter | Default Value | Description
security.external.ranger.url URL of the Ranger REST API, e.g., https://localhost:6080/. Leave blank for no Ranger server.
security.external.ranger.service_namekineticaName of the service created on the Ranger Server to manage this Kinetica instance
security.external.ranger.cache_minutes60Maximum minutes to hold on to data from Ranger
security.external.ranger_authorizer.addressipc://${gaia.temp_directory}/gpudb-ranger-0

The network URI for the ranger_authorizer to start on. The URI can be either TCP or IPC. A TCP address indicates a remote ranger_authorizer, which may run on another host; an IPC address is for a local ranger_authorizer.

Example addresses for remote or TCP servers:

tcp://127.0.0.1:9293
tcp://HOST_IP:9293

Example address for local IPC servers:

ipc:///tmp/gpudb-ranger-0
security.external.ranger_authorizer.timeout120Ranger Authorizer timeout in seconds
security.external.ranger_authorizer.remote_debug_port0

Remote debugger port used for the ranger_authorizer. Setting the port to 0 disables remote debugging.

Note

Recommended port to use is 5005


Auditing

This section controls the request auditor, which will audit all requests received by the server in full or in part based on the settings below. The output location of the audited requests is controlled via settings in the Auditing section of gpudb_logger.conf.

Configuration Parameter | Default Value | Description
enable_auditfalseControls whether request auditing is enabled. If set to true, the following information is audited for every request: Job ID, URI, User, and Client Address. The settings below control whether additional information about each request is also audited. If set to false, all auditing is disabled.
audit_headersfalseControls whether HTTP headers are audited for each request. If enable_audit is false this setting has no effect.
audit_bodyfalse

Controls whether the body of each request is audited (in JSON format). If enable_audit is false this setting has no effect.

Note

For requests that insert data records, this setting does not control the auditing of the records being inserted, only the rest of the request body; see audit_data below to control this.

audit_datafalse

Controls whether records being inserted are audited (in JSON format) for requests that insert data records. If either enable_audit or audit_body is false, this setting has no effect.

Note

Enabling this setting during bulk ingestion of data will rapidly produce very large audit logs and may cause disk space exhaustion; use with caution.

audit_responsefalseControls whether response information is audited for each request. If enable_audit is false this setting has no effect.
lock_auditfalseControls whether the above audit settings can be altered at runtime via the /alter/system/properties endpoint. In a secure environment where auditing is required at all times, this should be set to true to lock the settings to what is set in this file.
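
For example, a locked-down audit configuration that captures request bodies but not inserted records might look like the following (illustrative only):

enable_audit = true
audit_headers = true
audit_body = true
audit_data = false
audit_response = true
lock_audit = true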


Licensing

Configuration Parameter | Default Value | Description
license_key The license key to authorize running.


Processes and Threads

Configuration Parameter | Default Value | Description
min_http_threads8Set min number of web server threads to spawn. (default: 8)
max_http_threads512Set max number of web server threads to spawn. (default: 512)
init_tables_num_threads_per_tom8Set the maximum number of threads per tom for table initialization on gpudb startup
max_tbb_threads_per_rank-1Set the maximum number of threads (both workers and masters) to be passed to TBB on initialization. Generally speaking, max_tbb_threads_per_rank - 1 TBB workers will be created. Use -1 for no limit.
toms_per_rank1Set the number of TOMs (data container shards) per rank
tps_per_tom-1Size of the worker rank data thread pool. This includes operations such as inserts, updates, and deletes on table data. Note that multi-head inserts are not affected by this limit.
tcs_per_tom-1Size of the worker rank processing thread pool. This is primarily used for computation-based operations such as aggregates and record retrieval.
subtask_concurrency_limit-1Specifies the number of concurrent queries to prioritize at once in the task scheduler. Used to control depth-first query execution (execute the current query as quickly as possible) versus breadth-first (execute as many queries as possible). This is not a hard limit. If the total number of available queries exceeds this value and there are idle threads, the other, i.e. non-prioritized, queries will be executed.


Hardware

Configuration Parameter | Default Value | Description
rank0.gpu0

Specify the GPU to use for all calculations on the HTTP server node, rank0.

Note

The rank0 GPU may be shared with another rank.

rank1.taskcalc_gpu 

Set the GPU device for each worker rank to use. If no GPUs are specified, each rank will round-robin the available GPUs per host system. Add rank<#>.taskcalc_gpu as needed for the worker ranks, where # ranges from 1 to the highest rank # among the rank<#>.host parameters

Example setting the GPUs to use for ranks 1 and 2:

rank1.taskcalc_gpu = 0
rank2.taskcalc_gpu = 1
rank2.taskcalc_gpu 
rank0.numa_node 

Set the head HTTP rank0 numa node(s). If left empty, there will be no thread affinity or preferred memory node. The node list may be either a single node number or a range; e.g., 1-5,7,10.

If there will be many simultaneous users, specify as many nodes as possible that won't overlap the rank1 to rankN worker numa nodes that the GPUs are on.

If there will be few simultaneous users and WMS speed is important, choose the numa node the rank0.gpu is on.

rank1.base_numa_node 

Set each worker rank's preferred base numa node for CPU affinity and memory allocation. The rank<#>.base_numa_node is the node or nodes that non-data intensive threads will run in. These nodes do not have to be the same numa nodes that the GPU specified by the corresponding rank<#>.taskcalc_gpu is on for best performance, though they should be relatively near to their rank<#>.data_numa_node.

There will be no CPU thread affinity or preferred node for memory allocation if not specified or left empty.

The node list may be a single node number or a range; e.g., 1-5,7,10.

rank2.base_numa_node 
rank1.data_numa_node 

Set each worker rank's preferred data numa node for CPU affinity and memory allocation. The rank<#>.data_numa_node is the node or nodes that data intensive threads will run in and should be set to the same numa node that the GPU specified by the corresponding rank<#>.taskcalc_gpu is on for best performance.

If rank<#>.taskcalc_gpu is specified, the rank<#>.data_numa_node will be automatically set to the node that GPU is attached to; otherwise, if not specified or left empty, there will be no CPU thread affinity or preferred node for memory allocation.

The node list may be a single node number or a range; e.g., 1-5,7,10.

rank2.data_numa_node 


General

Configuration Parameter | Default Value | Description
default_ttl20Time-to-live in minutes of non-protected tables before they are automatically deleted from the database.
disable_clear_alltrueDisallow the /clear/table request to clear all tables.
pinned_memory_pool_size250000000Size in bytes of the pinned memory pool per-rank process to speed up copying data to the GPU. Set to 0 to disable.
concurrent_kernel_executiontrueEnable (if true) multiple kernels to run concurrently on the same GPU
max_concurrent_kernels0Maximum number of kernels that can be running at the same time on a given GPU. Set to 0 for no limit. Only takes effect if concurrent_kernel_execution is true
force_host_filter_executionfalseIf true then all filter execution will be host-only (i.e. CPU). This can be useful for high-concurrency situations and when PCIe bandwidth is a limiting factor.
max_get_records_size20000Maximum number of records that data retrieval requests such as /get/records and /aggregate/groupby will return per request.
request_timeout20Timeout (in minutes) for filter-type requests
on_startup_script 

Set an optional executable command that will be run once when Kinetica is ready for client requests. This can be used to perform any initialization logic that needs to be run before clients connect. It will be run as the gpudb user, so you must ensure that any required permissions are set on the file to allow it to be executed. If the command cannot be executed or returns a non-zero error code, then Kinetica will be stopped. Output from the startup script will be logged to /opt/gpudb/core/logs/gpudb-on-start.log (and its dated relatives). The gpudb_env.sh script is run directly before the command, so the path will be set to include the supplied Python runtime.

Example:

on_startup_script = /bin/ks arg1 arg2 ...
enable_pk_equi_joinfalseEnable pk-equi-join filters
enable_predicate_equi_jointrueEnable predicate-equi-join filter plan type
enable_overlapped_equi_jointrueEnable overlapped-equi-join filters
timeout_startup_subsystem60Timeout (in seconds) to wait for each database subsystem to startup. Subsystems include the Query Planner, Graph, Stats, & HTTP servers, as well as external text-search ranks.
timeout_shutdown_subsystem20Timeout (in seconds) to wait for each database subsystem to exit gracefully before it is force-killed.
cluster_event_timeout_startup_rank300Timeout (in seconds) to wait for a rank to start during a cluster event (e.g., failover) before the event is considered failed.
timeout_shutdown_rank300Timeout (in seconds) to wait for a rank to exit gracefully before it is force-killed. Machines with slow disk drives may require longer times and data may be lost if a drive is not responsive.


Visualization

Several of these options interact when determining the system resources required for visualization. Worker ranks use F(N+1) bytes of GPU memory (VRAM) and F bytes of main memory (RAM) where:

F = max_heatmap_size * max_heatmap_size * 6 bytes
N = opengl_antialiasing_level

For example, when max_heatmap_size is 3072 and opengl_antialiasing_level is 0, 56.6 MB of VRAM are required; when opengl_antialiasing_level is 4, 283 MB are required.

Configuration Parameter | Default Value | Description
point_render_threshold100000Threshold number of points (per-TOM) at which point rendering switches to fast mode.
symbology_render_threshold10000Threshold for the number of points (per-TOM) after which symbology rendering falls back to regular rendering
max_heatmap_size3072Maximum heatmap size (in pixels) that can be generated. This reserves max_heatmap_size^2 * 8 bytes of GPU memory at rank0
symbol_resolution100The image width/height (in pixels) of svg symbols cached in the OpenGL symbol cache.
symbol_texture_size4000The width/height (in pixels) of an OpenGL texture which caches symbol images for OpenGL rendering.
enable_opengl_renderertrueIf true, enable hardware-accelerated OpenGL renderer; if false, use the software-based Cairo renderer.
opengl_antialiasing_level0

The number of samples to use for antialiasing. Higher numbers will improve image quality but require more GPU memory to store the samples on worker ranks. This affects only the OpenGL renderer.

Value may be 0, 4, 8 or 16. When 0 antialiasing is disabled. The default value is 0.

rendering_precision_threshold30Single-precision coordinates are used for usual rendering processes, but depending on the precision of the geometry data and the use case, double-precision processing may be required at high zoom levels. Double-precision rendering is used from the zoom level specified by this parameter, which corresponds to a zoom level of the TMS or Google map service.
enable_lod_renderingtrueEnable level-of-details rendering for fast interaction with large WKT polygon data. Only available for the OpenGL renderer (when enable_opengl_renderer is true).
enable_vectortile_servicefalseIf true, enable Vector Tile Service (VTS) to support client-side visualization of geospatial data. Enabling this option increases memory usage on ingestion.
min_vectortile_zoomlevel1Input geometries are pre-processed upon ingestion for faster vector tile generation. This parameter determines the zoom level from which the vector tile pre-processing starts. A vector tile request for a lower zoom level than this parameter takes additional time because the vector tile needs to be generated on the fly.
max_vectortile_zoomlevel4Input geometries are pre-processed upon ingestion for faster vector tile generation. This parameter determines the zoom level at which the vector tile pre-processing stops. A vector tile request for a higher zoom level than this parameter takes additional time because the vector tile needs to be generated on the fly.
vectortile_map_tilergoogleThe name of the map tiler used for the Vector Tile Service. The google and tms map tilers are currently supported. This parameter should match the map tiler used by the client's vector tile renderer.
lod_data_extent-180 -90 180 90

Longitude and latitude ranges of geospatial data for which level-of-details representations are being generated. The parameter order is:

<min_longitude> <min_latitude> <max_longitude> <max_latitude>

The default values span over the world, but the level-of-details rendering becomes more efficient when the precise extent of geospatial data is specified.

lod_subregion_num12 6The number of subregions in horizontal and vertical geospatial data extent. The default values of 12 6 divide the world into subregions of 30 degree (lon.) x 30 degree (lat.)
lod_subregion_resolution512 512A base image resolution (width and height in pixels) at which a subregion would be rendered in a global view spanning over the whole dataset. Based on this resolution level-of-details representations are generated for the polygons located in the subregion.
max_lod_level8The maximum number of levels in the level-of-details rendering. As the number increases, level-of-details rendering becomes effective at higher zoom levels, but it may increase memory usage for storing level-of-details representations.
lod_preprocessing_level5

The extent to which shape data are pre-processed for level-of-details rendering during data insert/load or processed on-the-fly in rendering time. This is a trade off between speed and memory. The higher the value, the faster level-of-details rendering is, but the more memory is used for storing processed shape data.

The maximum level is 10 (most shape data are pre-processed) and the minimum level is 0.

vts.max_vertices_per_chunk20000000The maximum number of vertices within a chunk when rendering VTS tiles. If a chunk has more vertices than this limit the VTS request will return an error.
vts.max_features_per_tile200000The maximum number of features that will be returned in a VTS tile. This limit is applied at the chunk level, so the actual returned VTS tile may have more features. If there are more features within the given tile within a chunk, the additional features will not be returned within the tile.


Video

Configuration Parameter | Default Value | Description
video_default_ttl-1System default TTL for videos. Time-to-live (TTL) is the number of minutes before a video will expire and be removed, or -1 to disable.
video_max_count-1The maximum number of videos to allow on the system. Set to 0 to disable video rendering. Set to -1 to allow an unlimited number of videos.
video_temp_directory${gaia.temp_directory}/gpudb-temp-videosDirectory where video files should be temporarily stored while rendering. Only accessed by rank 0.


GAdmin

GAdmin is a web-based UI for administering the Kinetica DB and system.

Configuration Parameter | Default Value | Description
enable_tomcattrueEnable Tomcat web-server for the GAdmin UI


Workbench

Workbench is a web-based UI for managing database objects and writing rich SQL workbooks in Kinetica.

Configuration Parameter | Default Value | Description
enable_workbenchtrueStart the Workbench app on the head host when host manager is started.
workbench_port8000HTTP server port for Workbench if enabled.


Persistence

Configuration Parameter | Default Value | Description
persist_directory/opt/gpudb/persistSpecify a base directory to store persistence data files.
sms_directory${gaia.persist_directory}Base directory to store hashed strings.
text_index_directory${gaia.persist_directory}Base directory to store the text search index.
temp_directory${gaia.persist_directory}/tmpDirectory for Kinetica to use to store temporary files. Must be a fully qualified path, have at least 100 MB of free space, and have execute permission.
fsync_inodes_immediatefalseFsync directories in persist after every update to ensure filesystem toc is up to date.
metadata_store_pool_size128Maximum number of open metadata stores per rank.
metadata_store_sync_modenormal

Sync mode to use when persisting metadata stores to disk:

  • off : Turn off synchronous writes by default
  • normal : Use normal synchronous writes by default
  • full : Use full synchronous writes by default
wal.max_segment_size500000000Maximum size of each wal segment file. Larger sizes generally require more objects to be flushed before a given segment can be freed thus increasing the wal disk usage whereas smaller sizes increase the number of disk open/close operations.
wal.segment_count-1Approximate number of segment files to split the wal across. A minimum of two is required. The size of the wal is limited by segment_count * max_segment_size (per rank and per tom). Set to 0 to remove a size limit on the wal itself, though it will still be bounded by rank tier limits. Set to -1 to have the database decide automatically per table.
wal.sync_policyflush

Sync mode to use when persisting wal entries to disk:

  • none : Disable the wal
  • background : Wal entries are periodically flushed instead of immediately after each operation
  • flush : Protects entries in the event of a database crash
  • fsync : Protects entries in the event of an OS crash
wal.flush_frequency60Given a sync_policy of background, specifies how frequently wal entries are flushed to disk
wal.checksumtrueEnable checksum protection on the wal entries
wal.truncate_corrupt_tables_on_starttrueIf true, any table that is found to be corrupt after replaying its wal at startup will automatically be truncated so that the table becomes operable. If false, the user will be responsible for resolving the issue via sql REPAIR TABLE or similar.
synchronous_compressionfalseCompress data in memory immediately (or in the background) when compression is applied to an existing table's column(s) via /alter/table
persist_sync_time5The maximum time in seconds a secondary column store persist data file can be out of sync with memory. Set to a very high number to disable forced syncing.
load_vectors_on_starton_demand

Startup data-loading scheme:

  • always : load as much of the stored data as possible into memory before accepting requests
  • lazy : load the necessary data to start, and load as much of the remainder of the stored data as possible into memory lazily
  • on_demand : only load data as requests use it
build_pk_index_on_startalways
build_materialized_views_on_startalways
max_auto_view_updators3Maximum number of active automatic view updators
sms_max_open_files128Maximum number of open files (per-TOM) for the SMS (string) store
chunk_size8000000Number of records per chunk (0 disables chunking)
chunk_column_max_memory512000000Maximum data size for any one column in a chunk to be stored in memory in bytes. (512 MB) (0 = disable)
chunk_max_memory-1Maximum total chunk data size for all columns in a table, in bytes. -1 (the default) = size based on tier.ram limits or host memory per rank, up to 8 GB; 0 = disable.
execution_modedevice

Determines whether to execute kernels on host (CPU) or device (GPU). Possible values are:

  • default : engine decides
  • host : execute only host
  • device : execute only device
  • <rows> : execute on the host if chunked column contains the given number of rows or fewer; otherwise, execute on device.
shadow_cube_enabledtrueWhether or not to enable chunk caching
shadow_agg_size500000000The maximum number of bytes in the shadow aggregate cache
shadow_filter_size500000000The maximum number of bytes in the shadow filter cache


Monitoring / Statistics

Configuration Parameter | Default Value | Description
enable_stats_servertrueRun a statistics server to collect information about Kinetica and the machines it runs on.
event_server_internaltrueUse the internal event and metric server on the head host server if true, otherwise use the Kagent services. Note that Kagent installs will automatically set this value to false.
event_server_address${gaia.host0.address}Event collector server address and port.
event_server_port9080
enable_promtailfalseEnable running promtail to parse and send logs to the event_server_address.
alertmanager_address${gaia.event_server_address}Alertmanager server address and port.
alertmanager_port9089
fluentbit_address Fluentbit TCP server address and port with a json format parser.
fluentbit_port5170
telm.persist_query_metricstrueStore query metrics in a persistent table. If disabled metrics will still be available for point-in-time export.


Procs

Configuration Parameter | Default Value | Description
enable_procsfalseEnable procs (UDFs)
proc_directory/opt/gpudb/procsDirectory where proc files are stored at runtime. Must be a fully qualified path with execute permission. Python virtual environment installations are also persisted in this directory and can be referenced by executing procs.
proc_data_directory/opt/gpudb/procsDirectory where data transferred to and from procs is written. Must be a fully qualified path with sufficient free space for required volume of data.


Graph Servers

One or more graph servers can be created across available hosts.


Global Parameters

Configuration Parameter | Default Value | Description
enable_graph_serverfalseEnable/Disable all graph operations
graph.head_port8100Port used for responses from the graph server to the database server.


Server-Specific Parameters

Any number of graph servers may be configured, starting with server0, and must be specified in consecutive order (e.g., server0, server1, server2).

Server settings are defined as follows:

graph.server<#>.<parameter>
Configuration Parameter | Default Value | Description
graph.server0.hosthost0

Valid parameter names include:

  • host : Host on which the graph server process will be run.
  • port : Ports used for requests from the database server to the graph server(s). Either all ports (one per graph server) should be specified or a single starting value that will be incremented for each subsequent graph.
  • ram_limit : Maximum RAM (bytes) a server can use at any given time. Default is 0, which uses the rank RAM tier limits to limit memory. See the Tiered Storage section for tier limit details.

Example of two graph servers, where server1 has a specified RAM limit:

graph.head_port = 8099
graph.server0.host = host2
graph.server1.host = host4
graph.server0.ram_limit = 0
graph.server1.ram_limit = 500000000
graph.server0.ram_limit0


Etcd

Configuration Parameter | Default Value | Description
etcd_urls List of accessible etcd server URLs.
etcd_auth_user Encrypted login credential for Etcd at given URLs.


HA

Enable/Disable HA here. All other HA parameters are stored in etcd at the address specified in the Etcd section above.

Configuration Parameter | Default Value | Description
enable_hafalseEnable HA.


Machine Learning (ML)

Configuration Parameter | Default Value | Description
enable_mlfalseEnable the ML server.
ml_api_port9187default ML API service port number


Alerts

Configuration Parameter | Default Value | Description
enable_alertstrueEnable the alerting system.
alert_exe Executable to run when an alert condition occurs. This executable will only be run on rank0 and does not need to be present on other nodes.
alert_host_statustrueTrigger an alert whenever the status of a host or rank changes.
alert_host_status_filterfatal_init_errorOptionally, filter host alerts for a comma-delimited list of statuses. If a filter is empty, every host status change will trigger an alert.
alert_rank_statustrueTrigger an alert whenever the status of a rank changes.
alert_rank_status_filterfatal_init_error, not_responding, terminatedOptionally, filter rank alerts for a comma-delimited list of statuses. If a filter is empty, every rank status change will trigger an alert.
alert_rank_cuda_errortrueTrigger an alert if a CUDA error occurs on a rank.
alert_rank_fallback_allocatortrue

Trigger alerts when the fallback allocator is employed; e.g., host memory is allocated because GPU allocation fails.

Note

To prevent a flood of alerts when the fallback allocator is triggered in bursts, not every use will generate an alert.

alert_error_messagestrueTrigger generic error message alerts, in cases of various significant runtime errors.
alert_memory_percentage20, 10, 5, 1Trigger an alert if available memory on any given node falls to or below a certain threshold, either absolute (number of bytes) or percentage of total memory. For multiple thresholds, use a comma-delimited list of values.
alert_memory_absolute 
alert_disk_percentage20, 10, 5, 1Trigger an alert if available disk space on any given node falls to or below a certain threshold, either absolute (number of bytes) or percentage of total disk space. For multiple thresholds, use a comma-delimited list of values.
alert_disk_absolute 
alert_max_stored_alerts100The maximum number of triggered alerts guaranteed to be stored at any given time. When this number of alerts is exceeded, older alerts may be discarded to stay within the limit.
trace_directory${gaia.temp_directory}Directory where the trace event and summary files are stored. Must be a fully qualified path with sufficient free space for required volume of data.
trace_event_buffer_size1000000The maximum number of trace events to be collected


Postgres Proxy Server

Start a Postgres (TCP) server as a proxy to handle Postgres wire protocol messages.

Configuration Parameter | Default Value | Description
postgres_proxy.port5432TCP port that the postgres proxy server will listen on if enable_postgres_proxy is true.
postgres_proxy.min_threads2Set min number of server threads to spawn. (default: 2)
postgres_proxy.max_threads64Set max number of server threads to spawn. (default: 64)
postgres_proxy.max_queued_connections1Set max number of queued server connections. (default: 1)
postgres_proxy.idle_connection_timeout300Set idle connection timeout in seconds. (default: 300)
postgres_proxy.keep_alivefalseKeep connections alive between requests
postgres_proxy.sslfalseSet to true to use SSL; if true then ssl_key_file and ssl_cert_file must be provided
postgres_proxy.ssl_key_file 

Files containing the SSL private key and the SSL certificate for the proxy. If required, a self-signed certificate (expires after 10 years) can be generated via the command:

openssl req -newkey rsa:2048 -new -nodes -x509 \
  -days 3650 -keyout key.pem -out cert.pem
postgres_proxy.ssl_cert_file 
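
A sketch of an SSL-enabled proxy configuration, assuming the proxy has been turned on via enable_postgres_proxy (referenced in the port description above) and that the key/certificate paths below are placeholders for files you have generated:

enable_postgres_proxy = true
postgres_proxy.port = 5432
postgres_proxy.ssl = true
postgres_proxy.ssl_key_file = /opt/gpudb/certs/key.pem
postgres_proxy.ssl_cert_file = /opt/gpudb/certs/cert.pem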


SQL Engine

Configuration Parameter | Default Value | Description
sql.enable_plannertrueEnable Query Planner
sql.planner.addressipc://${gaia.temp_directory}/gpudb-query-engine-0

The network URI for the query planner to start on. The URI can be either TCP or IPC. A TCP address indicates a remote query planner, which may run on another host; an IPC address is for a local query planner.

Example for remote or TCP servers:

sql.planner.address  = tcp://127.0.0.1:9293
sql.planner.address  = tcp://HOST_IP:9293

Example for local IPC servers:

sql.planner.address  = ipc:///tmp/gpudb-query-engine-0
sql.planner.remote_debug_port0

Remote debugger port used for the query planner. Setting the port to 0 disables remote debugging.

Note

Recommended port to use is 5005

sql.planner.max_memory4096The maximum memory for the query planner to use in Megabytes.
sql.planner.max_stack6The maximum stack size for the query planner threads to use in Megabytes.
sql.planner.timeout120Query planner timeout in seconds
sql.planner.workers16Max Query planner threads
sql.results.cachingtrueEnable query results caching
sql.results.cache_ttl60TTL of the query cache results table
sql.force_binary_joinsfalsePerform joins between only 2 tables at a time; default is all tables involved in the operation at once
sql.force_binary_set_opsfalsePerform unions/intersections/exceptions between only 2 tables at a time; default is all tables involved in the operation at once
sql.plan_cache_size4000The maximum number of entries in the SQL plan cache. The default is 4000 entries, but the configurable range is 1 - 1000000. Plan caching will be disabled if the value is set outside of that range.
sql.result_cache_size4000Maximum number of entries in the SQL result cache. The default is 4000
sql.rule_based_optimizationtrueEnable rule-based query rewrites
sql.cost_based_optimizationtrueEnable the cost-based optimizer
sql.distributed_joinstrueEnable distributed joins
sql.distributed_operationstrueEnable distributed operations
sql.native_semi_joinstrueEnable SQL native semi-joins
sql.parallel_executiontrueEnable parallel query evaluation
sql.max_parallel_steps4Max parallel steps
sql.paging_table_ttl20TTL of the paging results table
sql.max_view_nesting_levels16Max allowed view nesting levels. Valid range (1-64)
sql.metadata_workers16Max Query metadata helper threads


AI

These settings configure the AI API integration used to enable RAG (retrieval-augmented generation).

Configuration Parameter | Default Value | Description
ai.api.providersqlgptAI API provider type.
ai.api.urlhttps://sqlgpt.ioAI API URL. The default is https://sqlgpt.io/sql/suggest
ai.api.key AI API key.
ai.api.embeddings_modelsqlgptAI Embedding model name.
ai.api.connection_timeout60AI API connection timeout in seconds


External Files

Configuration Parameter | Default Value | Description
external_files_directory${gaia.persist_directory}Defines the directory from which external files can be loaded
external_file_reader_num_tasks-1Maximum number of simultaneous threads allocated to a given external file read request, on each rank. Note that thread allocation may also be limited by resource group limits, or system load.
egress_single_file_max_size10000Max file size (in MB) to allow saving to a single file. May be overridden by target limitations.
egress_parquet_compressionsnappyParquet files compression type
kafka.batch_size1000Maximum number of records to be ingested in a single batch
kafka.wait_time30Maximum wait time (seconds) to buffer records received from kafka before ingestion
kafka.poll_timeout0Maximum time (milliseconds) for each poll to get records from kafka
system_metadata.stats_retention_days21System metadata catalog settings
system_metadata.stats_aggr_rowcount10000
system_metadata.stats_aggr_time1
system_metadata.retention_period604800For persistent metadata tables, time in seconds to retain rows prior to deletion. Note that records are deleted periodically, so the retention_period is the minimum lifetime of a given record.
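
As an illustration only, Kafka ingestion buffering could be tuned with values such as the following (the numbers are placeholders, not recommendations):

kafka.batch_size = 10000
kafka.wait_time = 10
kafka.poll_timeout = 1000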


Tiered Storage

Defines system resources using a set of containers (tiers) in which data can be stored, either on a temporary (memory) or permanent (disk) basis. Tiers are defined in terms of their maximum capacity and water mark thresholds. The general format for defining tiers is:

tier.<tier_type>.<config_level>.<parameter>

where tier_type is one of the five basic types of tiers:

  • vram : GPU memory
  • ram : Main memory
  • disk : Disk cache
  • persist : Permanent storage
  • cold : Extended long-term storage

Each tier can be configured on a global or per-rank basis, indicated by the config_level:

  • default : global, applies to all ranks, must be accessible by all hosts
  • rank<#> : local, applies only to the specified rank, overriding any global default

If a field is not specified at the rank<#> level, the specified default value applies. If neither is specified, the global system defaults will take effect, which vary by tier.

The parameters are also tier-specific and will be listed in their respective sections, though every tier, except Cold Storage, will have the following:

  • limit : [ -1, 1 .. N ] (bytes)
  • high_watermark : [ 1 .. 100 ] (percent)
  • low_watermark : [ 1 .. 100 ] (percent)

Note

To disable watermark-based eviction, set the low_watermark and high_watermark values to 100. Watermark-based eviction is also ignored if the tier limit is set to -1 (no limit).
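
For example, watermark-based eviction could be disabled for a tier as described above (the tier chosen here is illustrative):

tier.ram.default.high_watermark = 100
tier.ram.default.low_watermark = 100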


Global Tier Parameters

Configuration Parameter | Default Value | Description
tier.global.concurrent_wait_timeout600Timeout in seconds for subsequent requests to wait on a locked resource
tier.global.defer_cache_object_evictions_to_disktrueIf true, the tier manager will prioritize the eviction of persistent objects within a given tier priority.


VRAM Tier

The VRAM Tier is composed of the memory available in one or multiple GPUs per host machine.

A default memory limit and eviction thresholds can be set for CUDA-enabled devices across all ranks, while one or more ranks may be configured to override those defaults.

The general format for VRAM settings:

tier.vram.[default|rank<#>].all_gpus.<parameter>
Configuration Parameter | Default Value | Description
tier.vram.default.all_gpus.limit-1

Valid parameter names include:

  • limit : The maximum VRAM (bytes) per rank that can be allocated on GPU(s) across all resource groups. Default is -1, signifying to reserve 95% of the available GPU memory at startup.
  • high_watermark : VRAM percentage used eviction threshold. Once memory usage exceeds this value, evictions from this tier will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% memory usage threshold.
  • low_watermark : VRAM percentage used recovery threshold. Once memory usage exceeds the high_watermark, evictions will continue until memory usage falls below this recovery threshold. Default is 80, signifying an 80% memory usage threshold.
tier.vram.default.all_gpus.high_watermark90
tier.vram.default.all_gpus.low_watermark80
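
A sketch of a per-rank override of the global VRAM defaults (the byte limit is a placeholder value):

tier.vram.default.all_gpus.limit = -1
tier.vram.default.all_gpus.high_watermark = 90
tier.vram.default.all_gpus.low_watermark = 80
tier.vram.rank1.all_gpus.limit = 8000000000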


RAM Tier

The RAM Tier represents the RAM available for data storage per rank.

The RAM Tier is NOT used for small, non-data objects or variables that are allocated and deallocated for program flow control or used to store metadata or other similar information; these continue to use either the stack or the regular runtime memory allocator. This tier should be sized on each machine such that there is sufficient RAM left over to handle this overhead, as well as the needs of other processes running on the same machine.

A default memory limit and eviction thresholds can be set across all ranks, while one or more ranks may be configured to override those defaults.

If any rank on a host has the default memory limit (-1), its maximum allocation will be calculated as follows:

  • Head node: rank0 gets 10% of system memory; each remaining rank gets 70% of system memory / # worker ranks
  • Worker node: each worker rank gets 80% of system memory / # worker ranks

The general format for RAM settings:

tier.ram.[default|rank<#>].<parameter>
Configuration Parameter | Default Value | Description
tier.ram.default.limit-1

Valid parameter names include:

  • limit : The maximum RAM (bytes) per rank that can be allocated across all resource groups. Default is -1, signifying to automatically set the maximum capacity as a portion of total system memory or the host limit.
  • high_watermark : RAM percentage used eviction threshold. Once memory usage exceeds this value, evictions from this tier will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% memory usage threshold.
  • low_watermark : RAM percentage used recovery threshold. Once memory usage exceeds the high_watermark, evictions will continue until memory usage falls below this recovery threshold. Default is 80, signifying an 80% memory usage threshold.
tier.ram.default.high_watermark90
tier.ram.default.low_watermark80
tier.ram.rank0.limit-1The maximum RAM (bytes) for processing data at rank 0. Overrides the overall default RAM tier limit.
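
For illustration, fixed per-rank RAM limits (the byte values are placeholders) could be set as:

tier.ram.default.limit = -1
tier.ram.rank0.limit = 10000000000
tier.ram.rank1.limit = 100000000000
tier.ram.rank2.limit = 100000000000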


Disk Tier

Disk Tiers are used as temporary swap space for data that doesn't fit in RAM or VRAM. The disk should be as fast or faster than the Persist Tier storage since this tier is used as an intermediary cache between the RAM and Persist Tiers. Multiple Disk Tiers can be defined on different disks with different capacities and performance parameters. No Disk Tiers are required, but they can improve performance when the RAM Tier is at capacity.

A default storage limit and eviction thresholds can be set across all ranks for a given Disk Tier, while one or more ranks within a Disk Tier may be configured to override those defaults.

The general format for Disk settings:

tier.disk<#>.[default|rank<#>].<parameter>

Multiple Disk Tiers may be defined such as disk, disk0, disk1, ... etc. to support different tiering strategies that use any one of the Disk Tiers. A tier strategy can have, at most, one Disk Tier. Create multiple tier strategies to use more than one Disk Tier, one per strategy. See tier_strategy parameter for usage.

Configuration Parameter | Default Value | Description
tier.disk0.default.path${gaia.persist_directory}/diskcache

Valid parameter names include:

  • path : A base directory to use as a swap space for this tier.
  • limit : The maximum disk usage (bytes) per rank for this tier across all resource groups. Default is -1, signifying no limit and ignore watermark settings.
  • high_watermark : Disk percentage used eviction threshold. Once disk usage exceeds this value, evictions from this tier will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% disk usage threshold.
  • low_watermark : Disk percentage used recovery threshold. Once disk usage exceeds the high_watermark, evictions will continue until disk usage falls below this recovery threshold. Default is 80, signifying an 80% disk usage threshold.
  • store_persistent_objects : If true, allow the disk cache to store copies of data even if they are already stored in a persistent tier (persist/cold).

Example default disk cache configuration using disk0:

tier.disk0.default.path = /opt/gpudb/diskcache_0
tier.disk0.default.limit = -1
tier.disk0.default.high_watermark = 90
tier.disk0.default.low_watermark = 80
tier.disk0.default.store_persistent_objects = false
tier.disk0.default.limit-1
tier.disk0.default.high_watermark90
tier.disk0.default.low_watermark80
tier.disk0.default.store_persistent_objectsfalse


Persist Tier

The Persist Tier is a single pseudo-tier that contains data in persistent form that survives between restarts. Although it also is a disk-based tier, its behavior is different from Disk Tiers: data for persistent objects is always present in the Persist Tier (or Cold Storage Tier, if configured), but may not be up-to-date at any given time.

A default storage limit and eviction thresholds can be set across all ranks, while one or more ranks may be configured to override those defaults. The Graph Solver engine may be given its own storage settings; however, it is not tiered, and therefore cannot have limit/watermark settings applied.

The general format for Persist settings:

tier.persist.[default|rank<#>|text<#>|graph<#>].<parameter>

Warning

In general, limits on the Persist Tier should only be set if one or more Cold Storage Tiers are configured. Without a supporting Cold Storage Tier to evict objects in the Persist Tier to, operations requiring space in the Persist Tier will fail when the limit is reached.

Configuration Parameter | Default Value | Description
tier.persist.default.path${gaia.persist_directory}

Valid parameter names include:

  • path : Base directory to store column and object vectors.
  • storage : The storage volume corresponding to the persist tier, for managed storage volumes. Must be the vol<#> for a configured storage volume. Do not specify a default, as each rank and graph server should have their own storage volumes (unlisted, as there is no default value).
  • limit : The maximum disk usage (bytes) per rank for this tier across all resource groups. Default is -1, signifying no limit and ignore watermark settings.
  • high_watermark : Disk percentage used eviction threshold. Once disk usage exceeds this value, evictions from this tier to cold storage (if configured) will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% disk usage threshold.
  • low_watermark : Disk percentage used recovery threshold. Once disk usage exceeds the high_watermark, evictions will continue until disk usage falls below this recovery threshold. Default is 80, signifying an 80% disk usage threshold.

Note

path and storage are the only applicable parameters for the text<#> and graph<#> entries.

Example showing a rank0 configuration:

tier.persist.rank0.path = /opt/data_rank0
tier.persist.rank0.storage = vol0
tier.persist.rank0.limit = -1
tier.persist.rank0.high_watermark = 90
tier.persist.rank0.low_watermark = 80
tier.persist.default.limit-1
tier.persist.default.high_watermark90
tier.persist.default.low_watermark80


Cold Storage Tier

Cold Storage Tiers can be used to extend the storage capacity of the Persist Tier. Assign a tier strategy with cold storage to objects that will be infrequently accessed since they will be moved as needed from the Persist Tier. The Cold Storage Tier is typically a much larger capacity physical disk or a cloud-based storage system which may not be as performant as the Persist Tier storage.

A default storage limit and eviction thresholds can be set across all ranks for a given Cold Storage Tier, while one or more ranks within a Cold Storage Tier may be configured to override those defaults.

Note

If an object needs to be pulled out of cold storage during a query, it may need to use the local persist directory as a temporary swap space. This may trigger an eviction of other persisted items to cold storage due to low disk space condition defined by the watermark settings for the Persist Tier.

The general format for Cold Storage settings:

tier.cold<#>.[default|rank<#>].<parameter>

Multiple Cold Storage Tiers may be defined such as cold, cold0, cold1, ... etc. to support different tiering strategies that use any one of the Cold Storage Tiers. A tier strategy can have, at most, one Cold Storage Tier. Create multiple tier strategies to use more than one Cold Storage Tier, one per strategy. See tier_strategy parameter for usage.

Configuration Parameter | Default Value | Description
tier.cold0.default.typedisk

Valid parameter names include:

  • type : The storage provider type. Currently supports disk (local/network storage), hdfs (Hadoop distributed filesystem), azure (Azure blob storage), s3 (Amazon S3 bucket) and gcs (Google Cloud Storage bucket).
  • base_path : A base path based on the provider type for this tier.
  • wait_timeout : Timeout in seconds for reading from or writing to this storage provider. This value should ideally be less than the value for tier.global.concurrent_wait_timeout to allow concurrent queries sufficient time to acquire this resource during normal tiering operations, with some slack to accommodate the request.
  • connection_timeout : Timeout in seconds for connecting to this storage provider.
  • use_managed_credentials: If true, use cloud provider user settings from the environment. If false, and no credentials are supplied, use anonymous access. This option applies only to azure, gcs, and s3 providers.
  • use_https : This optional field can be used to override the default scheme for the storage endpoint where applicable. If true, use the https scheme (default), otherwise use http.

HDFS-specific parameter names:

  • hdfs_uri : The host IP address & port for the hadoop distributed file system. For example: hdfs://localhost:8020
  • hdfs_principal : The effective principal name to use when connecting to the hadoop cluster.
  • hdfs_use_kerberos : Set to true to enable Kerberos authentication to an HDFS storage server. The credentials of the principal are in the file specified by the hdfs_kerberos_keytab parameter. Note that Kerberos's kinit command will be run when the database is started.
  • hdfs_kerberos_keytab : The Kerberos keytab file used to authenticate the gpudb Kerberos principal.

Amazon S3-specific parameter names:

  • s3_bucket_name
  • s3_region (optional)
  • s3_endpoint (optional)
  • s3_aws_access_key_id (optional)
  • s3_aws_secret_access_key (optional)
  • s3_aws_role_arn (optional)
  • s3_encryption_type : This is optional and valid values are sse-s3 (Encryption key is managed by Amazon S3) and sse-kms (Encryption key is managed by AWS Key Management Service (kms)).
  • s3_kms_key_id : This is optional and must be specified when encryption type is sse-kms.
  • s3_encryption_customer_algorithm : This is optional and must be specified when encryption type is sse-c.
  • s3_encryption_customer_key : This is optional and must be specified when encryption type is sse-c.
  • s3_use_virtual_addressing : If true (default), S3 endpoints will be constructed using the virtual style which includes the bucket name as part of the hostname. Set to false to use the path style which treats the bucket name as if it is a path in the URI.
  • s3_verify_ssl : Set to false for testing purposes or when it's necessary to get past TLS errors (e.g., self-signed certificates). This value is true by default.

Note

If s3_aws_access_key_id and/or s3_aws_secret_access_key values are not specified, they may instead be provided by the AWS CLI or via the respective AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

Microsoft Azure-specific parameter names:

  • azure_container_name
  • azure_storage_account_name
  • azure_endpoint (optional) Specifies access to an Azure Private Link service.
  • azure_storage_account_key (optional) An Azure 'Access key' linked to a Storage account.
  • azure_sas_token (optional) A Shared Access Signature token.

Google Cloud Storage-specific parameter names:

  • gcs_bucket_name
  • gcs_project_id (optional)
  • gcs_service_account_id (optional)
  • gcs_service_account_private_key (optional)
  • gcs_service_account_keys (optional)
  • gcs_endpoint (optional) Override the default rest endpoint

Note

If the gcs_service_account_id, gcs_service_account_private_key, and/or gcs_service_account_keys values are not specified, the Google Cloud Client Libraries will attempt to find and use service account credentials from the GOOGLE_APPLICATION_CREDENTIALS environment variable.

tier.cold0.default.base_path/mnt/gpudb/cold_storage
tier.cold0.default.wait_timeout90
tier.cold0.default.connection_timeout1
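
A sketch of an S3-backed Cold Storage Tier using the parameters above; the bucket name, region, base path, and timeouts are placeholders:

tier.cold0.default.type = s3
tier.cold0.default.base_path = kinetica/cold_storage
tier.cold0.default.wait_timeout = 90
tier.cold0.default.connection_timeout = 10
tier.cold0.default.s3_bucket_name = my-kinetica-bucket
tier.cold0.default.s3_region = us-east-1
tier.cold0.default.use_managed_credentials = true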


Tier Strategy

Configuration Parameter | Default Value | Description
tier_strategy.defaultVRAM 1, RAM 5, DISK0 5, PERSIST 5

Default strategy to apply to tables or columns when one was not provided during table creation. This strategy is also applied to a resource group that does not specify one at time of creation.

The strategy is formed by chaining together the tier types and their respective eviction priorities. Any given tier may appear no more than once in the chain and the priority must be in range 1 - 10, where 1 is the lowest priority (first to be evicted) and 9 is the highest priority (last to be evicted). A priority of 10 indicates that an object is unevictable.

Each tier's priority is in relation to the priority of other objects in the same tier; e.g., RAM 9, DISK2 1 indicates that an object will be the highest evictable priority among objects in the RAM Tier (last evicted), but that it will be the lowest priority among objects in the Disk Tier named disk2 (first evicted). Note that since an object can only have one Disk Tier instance in its strategy, the corresponding priority will only apply in relation to other objects in Disk Tier instance disk2.

See the Tiered Storage section for more information about tier type names.

Format:

<tier1> <priority>, <tier2> <priority>, <tier3> <priority>, ...

Examples using a Disk Tier named disk2 and a Cold Storage Tier cold0:

vram 3, ram 5, disk2 3, persist 10
vram 3, ram 5, disk2 3, persist 6, cold0 10
tier_strategy.predicate_evaluation_interval60Predicate evaluation interval (in minutes) - indicates the interval at which the tier strategy predicates are evaluated


Default Resource Group

Resource groups are used to enforce simultaneous memory, disk and thread usage limits for all users within a given group. Users not assigned to a specific resource group will be placed within this default group. Tier-based limits are applied on top of existing rank tier limits.

Configuration Parameter | Default Value | Description
resource_group.default.schedule_priority50The scheduling priority for this group's operations, 1 - 100, where 1 is the lowest priority and 100 is the highest
resource_group.default.max_tier_priority10The maximum eviction priority for tiered objects, 1 - 10. This supersedes any priorities that are set by any user-provided tiering strategies.
resource_group.default.max_cpu_concurrency-1Maximum number of concurrent data operations; minimum is 4; -1 for no limit
resource_group.default.vram_limit-1The maximum memory (bytes) this group can use at any given time in the VRAM tier; -1 for no limit
resource_group.default.ram_limit-1The maximum memory (bytes) this group can use at any given time in the RAM tier; -1 for no limit
resource_group.default.data_limit-1The maximum memory (bytes) this group can cumulatively use in the RAM tier; -1 for no limit
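
As a sketch only (the limits are placeholders), the default resource group could be constrained as follows:

resource_group.default.schedule_priority = 50
resource_group.default.max_cpu_concurrency = 16
resource_group.default.ram_limit = 200000000000
resource_group.default.data_limit = -1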


Storage Volumes

When persisted rank data is stored on attached external storage volumes, the following config entries are used to automatically attach and mount the volume upon migration of a rank to another host. Attaching and mounting is performed by the script specified by np1.storage_api_script.

The general format for storage volumes:

storage.volumes.vol<#>.<parameter>

Volumes are numbered starting at zero; the number of volumes should match the number of ranks, plus additional volumes for the graph servers.

Valid parameter names include:

  • fs_uuid : The UUID identifier for the volume on the cloud provider.
  • id : The cloud identifier of the volume.
  • mount_point : The local path to mount the cloud volume. The folder must exist and be owned by gpudb.

Example:

storage.volumes.vol0.fs_uuid = my_vol_uuid
storage.volumes.vol0.id = /subscriptions/my_az_subscription_uuid/resourceGroups/my_az_rg/providers/Microsoft.Compute/
storage.volumes.vol0.mount_point = /opt/data_rank0


KiFS

Settings for KiFS, Kinetica's global file storage. There are two ways to configure KiFS:

  1. (Default) Leave the settings empty. This runs KiFS in the Persist Tier.
  2. Configure cold storage to host KiFS.

For option 2, the settings are identical to those of a cold tier as previously described in the Cold Storage Tier section, except prefixed by kifs instead of tier.cold<#>.rank<#>. All cold tier types are supported. If the disk type is used, the base path must be mounted on all hosts.

Example configuration for disk:

kifs.type = disk
kifs.base_path = /opt/gpudb/kifs
kifs.wait_timeout = 10
kifs.connection_timeout = 30

Example configuration for AWS S3:

kifs.type = s3
kifs.base_path = gpudb/
kifs.wait_timeout = 10
kifs.connection_timeout = 30
kifs.s3_bucket_name = my-kinetica-bucket-name
kifs.s3_aws_access_key_id = my-aws-access-key
kifs.s3_aws_secret_access_key = my-aws-secret-key
kifs.s3_aws_role_arn = my-aws-role-arn
kifs.s3_encryption_type = my-s3-encryption-type
kifs.s3_kms_key_id = my-kms-key-id
kifs.s3_encryption_customer_algorithm = my-encryption-algorithm
kifs.s3_encryption_customer_key = my-encryption-key
kifs.use_managed_credentials = false

See the Cold Storage Tier section for examples of Azure, Google Cloud Storage, and HDFS settings.

Configuration Parameter | Default Value | Description
kifs.options.directory_data_limit4294967296The default maximum capacity to apply when creating a KiFS directory (bytes); -1 for no limit. Changing this limit is not retroactive and thus will not apply to existing KiFS directories.