Instructions for installing and configuring Kinetica on on-premise hardware using KAgent.
Note
Kinetica can be installed on pre-provisioned instances in AWS, Azure, or GCP via KAgent. For offerings provisioned within cloud environments directly, see Cloud-Ready.
System Requirements
Operating system, hardware, and network requirements to run Kinetica.
Certified OS List
CPU Platform | Linux Distribution | Versions |
---|---|---|
x86 | RHEL / CentOS | 7.4+ |
x86 | RHEL / AlmaLinux / RockyLinux | 8.2+ |
x86 | Ubuntu | 18.04 LTS, 20.04 LTS |
x86 | SUSE | 15.3 |
x86-avx512 | RHEL / CentOS | 7.4+ |
x86-avx512 | RHEL / AlmaLinux / RockyLinux | 8.2+ |
x86-avx512 | Ubuntu | 20.04 LTS |
ppc64le | RHEL / CentOS | 7.4+ |
Minimum Hardware Requirements
Component | Specification |
---|---|
CPU | Dual-socket server with at least 8 cores; Intel (or compatible) x86-64 or PowerPC 8 LE (ppc64le) |
GPU | See GPU Driver below for the list of supported GPUs |
Memory | Minimum 8GB |
Hard Drive | SSD or 7200 RPM SATA hard drive with at least 4x the memory capacity |
GPU Driver Matrix
Cards
Preferred
The cards below have been tested in large-scale production environments and provide the best performance for the database.
GPU | Driver | Kinetica Package |
---|---|---|
P4/P40/P100 | 470.X (or higher) | gpudb-cuda-license |
V100 | 470.X (or higher) | gpudb-cuda-license |
T4 | 470.X (or higher) | gpudb-cuda-license |
A10/A40/A100 | 470.X (or higher) | gpudb-cuda-license |
Supported
The cards below are supported for Kinetica but should only be used for smaller testing workloads or as necessary.
GPU | Driver | Kinetica Package |
---|---|---|
750ti | 470.X (or higher) | gpudb-cuda-license |
K20/K40/K80 | 470.X (or higher) | gpudb-cuda-license |
M6/M60 | 470.X (or higher) | gpudb-cuda-license |
Active Directory
If your environment uses Microsoft Active Directory for authentication, and security processes on your servers automatically remove accounts that are not registered in Active Directory, the gpudb user must be added to Active Directory as a Linux-type account before installing Kinetica.
KAgent Installation
KAgent can be deployed as a RHEL or Debian/Ubuntu installation package on any server inside or outside the cluster. After copying the KAgent package to the target server, deploy it using the standard procedures for a local package:
On RHEL:
sudo yum install ./kagent-<version>.<architecture>.rpm
On Debian/Ubuntu:
sudo apt install ./kagent-<version>.<architecture>.deb
This installs the package to the directory /opt/gpudb/kagent and registers and starts the kagent_ui service. KAgent will open port 8081 on the local firewall (if enabled).
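To confirm the firewall step, you can check that the UI port is listening; a minimal sketch, assuming the ss utility is available on the server:
# Confirm the KAgent UI is listening on port 8081
ss -tln | grep 8081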
Note
If Kinetica is to be installed via KAgent, but managed via command line, the etcd configuration management service will need to be managed separately from the database & system management processes. See Managing Kinetica Services for details.
Kinetica Installation
Installation of Kinetica using KAgent involves the automated deployment of the installation package via either a browser-based UI or console-driven CLI.
Important
A list of the IP addresses for server(s) running Kinetica and additional KAgent instance(s) must be compiled before the installation process. The installation process also requires a license key. To receive a license key, contact support at support@kinetica.com.
KAgent UI
To access the KAgent UI and begin setting up a cluster:
Tip
Review KAgent for more information on KAgent and its features.
Ensure the KAgent service is started:
service kagent_ui status
Browse to the KAgent UI using IP or host name:
http://<kagent-host>:8081/kagent
Optionally, if using custom rings, i.e. not the default ring, click Rings then click Clusters next to the desired ring. See High Availability Architecture for more information about rings and high availability (HA).
Click Add New or Existing Cluster.
Cluster
Enter a name for the cluster. The name cannot contain spaces or underscores.
Optionally, select one or more of the following packages:
Select Core if node(s) in the cluster should have the core database functionality installed on them.
Important
First-time setups should always have Core selected.
Select etcd if node(s) in the cluster should have etcd installed on them.
Important
The etcd package is required for the following:
- cluster status reporting via KAgent and GAdmin
- high availability
- cluster monitoring
Select Graph if a node in the cluster should have the graph server installed on it. See Network Graphs & Solvers Concepts for more information.
Optionally, select to install AAW (Active Analytics Workbench) if a node should have AAW installed on it. See Active Analytics Workbench for more information.
Optionally, select to install KAgent if a node should also have KAgent installed on it. See KAgent for more information.
Optionally, select to install RabbitMQ if setting up a ring for High Availability. Review High Availability Architecture and High Availability Configuration & Management for more information.
For the Install Mode, select either Online (install directly from the online Kinetica repository) or Offline (install from uploaded packages). If Offline is selected, click Upload Packages, then upload a package file for each component or driver desired for the installation.
Important
If performing an offline installation, all necessary dependencies will need to be installed prior to cluster setup.
For the Version, select either CUDA (GPU) or Intel (CPU-only) to determine the package variant to install.
If the Version is set to CUDA, ensure Automatically install Nvidia driver is selected. This will automatically configure the server(s) for an Nvidia GPU driver and install the most compatible driver.
Enter the license key.
Optionally, provide an external files directory for use with external tables to override the default external files directory parameter value.
If AAW is selected to install, select a K8 Setup:
Automatic -- KAgent will install Kubernetes / kubectl and upload a default configuration file.
Important
Before installing the embedded Kubernetes cluster, review the Kubernetes Considerations.
Custom -- Upload a configuration file for an already existing Kubernetes installation and provide a public IP address for the server hosting the Kubernetes installation. Note that AAW requires Kubernetes; see Active Analytics Workbench (AAW) Overview for more information.
Click Next.
Deployment
Important
KAgent does not support multiple installations of Kinetica with differing deployment types, e.g., an On Premise cluster and a Microsoft Azure cluster cannot exist in the same instance of KAgent.
Select the On Premise deployment method, and click Next.
Important
If the Open Firewall Ports checkbox is cleared, the firewall must be configured manually to allow the required ports listed in the default ports table. Consult Adjust Firewall for tips on configuring the firewall.
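If the ports must be opened by hand, a minimal sketch using firewalld follows (port 9191, the head node port used elsewhere in this guide, is shown as an example; repeat for each port in the default ports table):
sudo firewall-cmd --permanent --add-port=9191/tcp
sudo firewall-cmd --reload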
Security
Important
The Security configuration section is only required if Core is being installed.
Enter and confirm an Admin Password. It must meet the password strength requirements.
Important
This is the password used to access Reveal, Active Analytics Workbench (AAW), KAgent, and GAdmin as the default Admin user.
Select an SSL Mode:
- Cert/key setup not required -- Kinetica will not require SSL certificate/key creation/upload and SSL will not be enabled
- User-provided cert/key per node -- the user must upload an SSL certificate and key for each node; Kinetica copies the cert/key pair to /opt/gpudb/certs, enables HTTPD, and configures HTTPD to use HTTPS
- Generate self-signed cert/key per node -- KAgent generates a self-signed certificate and key for each node and places it in /opt/gpudb/certs, enables HTTPD, and configures HTTPD to use HTTPS
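For reference, a certificate/key pair similar to the one KAgent generates for the self-signed option can be created by hand; a minimal sketch (file names and subject are illustrative only):
# Illustrative only -- KAgent performs this automatically when the self-signed option is selected
openssl req -x509 -newkey rsa:4096 -nodes -days 365 -keyout key.pem -out cert.pem -subj "/CN=<node-hostname>"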
Select an Authentication type and fill the fields as necessary:
- None -- no authentication or authorization
- LDAP -- configures Kinetica to authenticate via LDAP; requires authentication to connect to the database, enables authorization, enables external authentication, automatically creates users in the database for LDAP users, and automatically grants roles in the database to LDAP users
- Active Directory -- configures Kinetica to authenticate via Microsoft Active Directory; requires authentication to connect to the database, enables authorization, enables external authentication, automatically creates users in the database for Active Directory users, and automatically grants roles in the database to Active Directory users
- Kerberos -- configures Kinetica to authenticate via Kerberos; requires authentication to connect to the database, enables authorization, enables external authentication, automatically creates users in the database for Kerberos users, and automatically grants roles in the database to Kerberos users
Warning
Running without SSL or authentication is not recommended! For more information on security configurations and settings, as well as how to manually configure Kinetica for a secure setup, see Security Configuration.
Click Next.
Nodes
Click Add New Node until the desired number of nodes that will have Kinetica (and potentially other services) installed on them has been added.
For each node, input a custom Label (hostname is suggested), the Internal IP, and the Public IP.
If the User-provided cert/key per node SSL Mode was selected in Security, an SSL column will be added to the configuration page--click the lock icon in the SSL column to open the SSL Certificate/Key window, where the SSL cert and key, along with an optional public hostname, can be provided. Repeat this for each node.
Optionally, select if each node should have the Core package installed. The Core package contains access to the database and its core components and functionality. Note that if the core package is not installed on a node, that node cannot be designated as the Head Node.
Optionally, select if each node should have the etcd package installed. The etcd package provides a means for each node in the cluster to have a consistent record of statuses and locations for the other node(s) in the cluster. Ensure at least one node will have etcd installed; select additional nodes for redundancy.
Select the desired node for the Head Node using the corresponding radio button. This server will receive user requests and parcel them out to the other worker nodes of the system. The head node of the cluster (or the only node in a single-node system) is also used for the administration of the cluster and, by default, hosts Reveal and GAdmin; as such, it will require special handling during the installation process.
Note
All services and privileges (Head, Graph, AAW, etc.) can exist on a single node if desired, assuming there are enough resources to handle it.
If the Graph package was selected for install in Cluster, select the desired node(s) to host the graph service using the corresponding radio button. The graph node does not need to have the Core package enabled. Consult Distributed Graph Servers for more information on leveraging multiple graph servers.
If the AAW package was selected for install in Cluster, optionally set the number of reserved GPUs for AAW to use for each node. The AAW service will co-exist with the head node. Note that the AAW package will be automatically installed on every node with the Core package enabled.
Important
Some features of AAW require GPUs to work or have increased performance. Review the AAW documentation for more information.
If the KAgent package was selected for install in Cluster, select the desired node to host the service. The KAgent node does not need to have the Core package enabled.
If the RabbitMQ package was selected for install in Cluster because a High Availability setup is required, select the desired node(s) to have RabbitMQ installed. Ensure at least one node will have RabbitMQ installed if enabling High Availability (HA) for the cluster; select additional nodes to have RabbitMQ installed for redundant queues. A node can host RabbitMQ exclusively, without any other services, if desired.
Important
In total, an odd number of nodes should be selected for RabbitMQ installation. Kinetica recommends installing RabbitMQ on machines that will not have the Core package enabled.
Click Next.
Confirm which IP address KAgent should use to connect to the cluster: Internal or Public.
Credentials
- For the Server SSH Credentials, enter the SSH username and password or upload the SSH private key that will be used to access the node(s).
- Optionally, enter the sudo password.
- Click Verify.
The console will appear showing the log of KAgent interactions as KAgent attempts to access the cluster with the provided credentials and also retrieve information on the hosts, including Kinetica version and configuration (if installed), hostname and IP addresses, OS type, and Nvidia information.
Installation
Review the Installation Summary to ensure there are no validation errors in the information. The highlighted IP address will be the one KAgent uses to connect to the cluster.
Tip
Click CLI Commands to view and/or copy the KAgent command line interface commands that will be run in the background (order is from top to bottom).
Click Install. KAgent will open a window displaying the progress of the installation.
Tip
Click Details next to a step to see stdout and stderr for that step. Click to copy the displayed text.
The installation may take a while as KAgent initializes each node in the cluster, verifies the cluster, adds a repository, downloads the package, installs the package to the directory /opt/gpudb, creates a group named gpudb, and creates two users (gpudb & gpudb_proc) whose home directories are located in /home/gpudb. This will also register two services: gpudb & gpudb_host_manager.
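As a quick sanity check once installation completes, the accounts and services created by this step can be listed; a minimal sketch using the names documented above (the exact service invocation may differ by init system):
id gpudb
id gpudb_proc
service gpudb status
service gpudb_host_manager status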
Important
If Automatic Kubernetes (K8) installation was selected and Kinetica is being installed on a RHEL-based system, KAgent will request permission to disable SELinux on the nodes. Kubernetes cannot be installed otherwise. Click I Agree to continue with the installation; click No to stop the installation and manually disable SELinux.
After a successful installation, if KAgent was also installed on a separate node, you will be redirected to the KAgent instance on that cluster node. If KAgent was not installed on a separate node, you will be redirected to the Kinetica Administration Application (GAdmin).
Important
After the installation, the cluster will be added to KAgent and you'll be logged into KAgent as the admin user for the cluster. After this session is over (via either logging out or session timeout), you'll be required to log into KAgent every time you want to access KAgent features. See Logging In / Out for more information.
Validation
To validate that Kinetica has been installed and started properly, you can perform the following tests.
Curl Test
To ensure that Kinetica has started (you may have to wait a moment while the system initializes), you can run curl on the head node to check that the server is responding and that the port is reachable through any running firewalls:
$ curl localhost:9191
Kinetica is running!
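The same check can be run from another machine to confirm that the head node port is reachable through the firewall; a sketch using the head node address referenced elsewhere in this guide:
curl http://<head-node-ip-address>:9191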
API Test
You can also run a test to ensure that the API is responding properly. There is an admin simulator project in Python provided with the Python API, which pulls statistics from the Kinetica instance. Running this on the head node, passing in the appropriate <username> & <password>, you should see:
$ /opt/gpudb/bin/gpudb_python /opt/gpudb/kitools/gadmin_sim.py -u <username> -p <password> --table --summary
+-----------------+--------------------------------+----------------------+----------------------+-------+
| Schema          | Table/View                     | Records              | Type ID              | TTL   |
+=================+================================+======================+======================+=======+
| SYSTEM          | <ALL TABLES/VIEWS>             | 1                    |                      |       |
| SYSTEM          | ITER                           | 1                    | UNSET_TYPE_ID        | -1    |
+-----------------+--------------------------------+----------------------+----------------------+-------+
+---------------------------+----------------------+
| Object Type               | Count                |
+===========================+======================+
| Schemas                   | 1                    |
| Tables & Views            | 1                    |
| Records                   | 1                    |
| Records + Track Elements  | 1                    |
+---------------------------+----------------------+
GAdmin Status Test
The administrative interface itself can be used to validate that the system is functioning properly. Simply log into GAdmin. Browse to Dashboard to view the status of the overall system and Ranks to view the status breakdown by rank.
Ingest/Read Test
After verifying Kinetica has started and its components work, you should confirm ingesting and reading data works as expected.
- Navigate to the Demo tab on the Cluster page.
- Click Load Sample Data under the NYC Taxi section, then click Load to confirm.
- Once the data is finished loading, click View Loaded Data. The data should be available in the nyctaxi table located in the demo schema.
If Reveal is enabled:
Navigate to:
http://<head-node-ip-address>:8088/
Log into Reveal and change the administration account's default password.
Click NYC Taxi under Dashboards. The default NYC Taxi dashboard should load.
Core Utilities
Kinetica comes packaged with many helpful server and support executables that can be found in /opt/gpudb/core/bin/ and /opt/gpudb/bin. Note that any of the gpudb_hosts_*.sh scripts will operate on the hosts specified in gpudb.conf. Run any of the following with the -h option for usage information.
Important
For most of the utilities that use passwordless SSH, an AWS PEM file can be specified instead using the -i option (the exception being the gpudb_hosts_persist_* scripts). If passwordless SSH is not set up and no PEM file is specified, you will be prompted for a password on each host.
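For example, displaying usage information and running a command across all hosts with a PEM file might look like the following; a sketch only, as exact flag placement can vary by script:
/opt/gpudb/core/bin/gpudb_hosts_ssh_execute.sh -h
/opt/gpudb/core/bin/gpudb_hosts_ssh_execute.sh -i /path/to/key.pem "hostname"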
Environment Configuration and Tools
Some of the most commonly used and important utilities are also available in the /opt/gpudb/bin directory.
Note
This directory also contains the KI Tools suite.
Utility / Script | Uses Passwordless SSH | Description |
---|---|---|
gpudb_alter_password | No | Script to change a given user's password |
gpudb_env | No | Utility to run a program and its given arguments after setting the PATH, LD_LIBRARY_PATH, PYTHONPATH, and others to the appropriate /opt/gpudb/ directories. Use this script or /opt/gpudb/bin/gpudb_python to correctly set up the environment to run Kinetica's packaged Python version. You can also run source /opt/gpudb/core/bin/gpudb_env.sh to have the current environment updated. |
gpudb_pip | Yes | Script to run Kinetica's packaged pip version. Runs on all hosts. This can be used in place of pip, e.g., /opt/gpudb/bin/gpudb_pip install gpudb |
gpudb_python | No | Script to correctly set up the environment to run Kinetica's packaged Python version. This can be used in place of the python command, e.g., /opt/gpudb/bin/gpudb_python my_python_file.py |
gpudb_udf_distribute_thirdparty | No | Utility to mirror the local /opt/gpudb/udf/thirdparty to remote hosts. Creates a dated backup on the remote host before copying |
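Combining the two most common utilities above, a typical invocation sketch (package and script names are illustrative):
/opt/gpudb/bin/gpudb_pip install gpudb
/opt/gpudb/bin/gpudb_python my_python_file.py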
Helper Scripts
Additional helper scripts and utilities are available in /opt/gpudb/core/bin.
Utility / Script | Uses Passwordless SSH | Description |
---|---|---|
gpudb | No | Run as gpudb user or root. The Kinetica system start/restart/stop/status script |
gpudb_alter_password.py | No | Script to change a given user's password |
gpudb_cluster_cuda | No | Server executable for CUDA clusters. Displays version and configuration information. This should only be run by the gpudb executable (see above). |
gpudb_cluster_intel | No | Server executable for Intel clusters. Displays version and configuration information. This should only be run by the gpudb executable (see above). |
gpudb_conf_parser.py | No | Run using /opt/gpudb/bin/gpudb_python. Utility for parsing the /opt/gpudb/core/etc/gpudb.conf file and printing the settings and values. |
gpudb_config_compare.py | No | Script to compare two configuration files: a "modified" configuration file and a "baseline" configuration file. The script can also merge the files after outputting the diff. The merged file uses the "modified" file's setting values where the "modified" and "baseline" files share a setting; if a setting is present in the "baseline" file but not in the "modified" file, the "baseline" value is used. Supports .ini, .conf, .config, .py, and .json files. |
gpudb_decrypt.sh | No | Utility for decrypting text encrypted by gpudb_encrypt.sh. See Obfuscating Plain-Text Passwords for details. |
gpudb_disk_mount_azure.sh | No | Utility used for attaching and detaching data volumes for Kinetica clusters running in Microsoft Azure. |
gpudb_encrypt.sh | No | Utility for encrypting text. See Obfuscating Plain-Text Passwords for details. |
gpudb_env.sh | No | Utility to run a program and its given arguments after setting the PATH, LD_LIBRARY_PATH, PYTHONPATH, and others to the appropriate /opt/gpudb/ directories. Use this script or /opt/gpudb/bin/gpudb_python to correctly set up the environment to run Kinetica's packaged Python version. You can also run source /opt/gpudb/core/bin/gpudb_env.sh to have the current environment updated. |
gpudb_file_integrity_check.py | No | Utility to test the consistency of the /opt/gpudb/persist directory |
gpudb_generate_key.sh | No | Utility for generating an encryption key. See Obfuscating Plain-Text Passwords for details. |
gpudb_host_manager | No | The host daemon process that starts and manages any Kinetica processes. |
gpudb_hosts_addresses.sh | Yes | Prints all the unique hostnames (or IPs) specified in gpudb.conf |
gpudb_hosts_diff_file.sh | Yes | Run as gpudb user or root. Utility to diff a given file from the current machine to the specified destination file on one or more hosts |
gpudb_hosts_logfile_cleanup.sh | Yes | Run as gpudb user or root. Script to delete old log files and optionally keep the last n logs |
gpudb_hosts_persist_clear.sh | Yes | Run as gpudb user or root. Script to clear the database persist files (location specified in gpudb.conf). Important: Only run this while the database is stopped. |
gpudb_hosts_rsync_to.sh | Yes | Run as gpudb user. Script to copy files from this server to the remote servers using rsync |
gpudb_hosts_ssh_copy_id.sh | Yes | Run as gpudb user or root. Script to distribute the gpudb user's public SSH keys to the other hosts defined in gpudb.conf to allow password-less SSH. This script should only be run from the head node. Important: This script should be re-run after changing the host configuration to redistribute the keys |
gpudb_hosts_ssh_execute.sh | Yes | Run as gpudb user or root. Script to execute a program with arguments on all hosts specified in gpudb.conf, e.g., ./gpudb_hosts_ssh_execute.sh "ps aux" or ./gpudb_hosts_ssh_execute.sh "hostname" |
gpudb_hosts_ssh_setup_passwordless.sh | Yes | Script to add an authorized SSH key for a given user across a set of hosts. |
gpudb_keygen | No | Executable to generate and print a machine key. You can use the key to obtain a license from support@kinetica.com |
gpudb_log_plot_job_completed_time.sh | No | Plots job completion time statistics using gnuplot |
gpudb_machine_info.sh | No | Script to print OS config information that affects performance as well as suggestions to improve performance |
gpudb_migrate_persistence.py | No | Utility to migrate data from a local persist directory into the database |
gpudb_nvidia_setup.sh | No | Utility to configure the Nvidia GPU devices for best performance or restore defaults. Root permission is required to change values. When run as a non-root user, the utility reports informational settings and permission errors |
gpudb_open_files.sh | No | Script to print the files currently open by the database |
gpudb_process_monitor.py | No | Script to check a process list against a matching regular expression and print a log to stdout when the process is started or stopped. The script can also run a program, send emails, and/or SNMP alerts when the process starts or stops. The script can be configured using a configuration file, but note that some settings can be overridden from the command line. |
gpudb_sysinfo.sh | No | Provides more information when run as root. Script to print a variety of information about the system and hardware for debugging. You can also make a .tgz file of the output. Rerun this program as needed to keep records of the system. Use a visual diff program to compare two or more system catalogs |
gpudb_udf_distribute_thirdparty.sh | Yes | Utility to mirror the local /opt/gpudb/udf/thirdparty to remote hosts. Creates a dated backup on the remote host before copying |
gpudb_useradd.sh | No | Script to create the gpudb:gpudb and gpudb_proc:gpudb_proc user:groups and SSH ID. This script can be rerun as needed to restore the user:groups and SSH config. Be sure to rerun gpudb_hosts_ssh_copy_id.sh (on the head node only) to redistribute the SSH keys whenever they are changed |
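For day-to-day administration, the gpudb script above is the usual entry point; a minimal sketch of common invocations (run as the gpudb user or root):
/opt/gpudb/core/bin/gpudb status
/opt/gpudb/core/bin/gpudb restart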
Logging
The best way to troubleshoot any issues is by searching through the available logs. For more information on changing the format of the logs, see Custom Logging. Each component in Kinetica has its own log, the location of which is detailed below:
Component | Log Location |
---|---|
Active Analytics Workbench (AAW) (API) | /opt/gpudb/kml/logs/ |
Active Analytics Workbench (AAW) (UI) | /opt/gpudb/kml/ui/logs/ |
etcd Server | /opt/gpudb/etcd/logs/ |
GAdmin (Tomcat) | /opt/gpudb/tomcat/logs/ |
Graph Server | /opt/gpudb/graph/logs/ |
KAgent (Service) | /opt/gpudb/kagent/logs/ |
KAgent (UI) | /opt/gpudb/kagent/ui/logs/ |
Kinetica system logs | /opt/gpudb/core/logs/ |
Reveal | /opt/gpudb/connector/reveal/logs/ |
SQL Engine | /opt/gpudb/sql/logs/ |
Stats Server | /opt/gpudb/kagent/stats/logs/ |
Text Server | /opt/gpudb/text/logs/ |
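To inspect a component's logs, list the newest files in its directory and follow one; a sketch using the Kinetica system log directory (file names vary by installation):
ls -lt /opt/gpudb/core/logs/ | head
tail -f /opt/gpudb/core/logs/<log-file>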
Additional Configuration
If additional edits to the database's configuration file are desired, e.g., UDFs (procs), auditing, etc., the database will need to be stopped and the file updated. System configuration is done primarily through the configuration file /opt/gpudb/core/etc/gpudb.conf, and while all nodes in a cluster have this file, only the copy on the head node needs to be modified. The configuration file can be edited via GAdmin or via a text editor on the command line.
Important
Only edit /opt/gpudb/core/etc/gpudb.conf on the head node. Editing the file on worker nodes is not supported and may lead to unexpected results.
Some common configuration options to consider updating:
Enabling auditing
Changing the persist directory
Important
The directory should meet the following criteria:
- Available disk space that is at least 4x memory
- Writable by the gpudb user
- Consist of raided SSDs
- Not be part of a network share or NFS mount
Enabling UDFs (procs)
Adjusting storage tiers and resource groups
To edit the configuration file via GAdmin:
- Log into GAdmin
- Enter admin for the Username
- Enter the Admin Password provided to KAgent for the Password (refer to KAgent UI for more information)
- Click Log In
- Stop the system.
- Navigate to the configuration file editor.
- Edit the file in the text window.
- Click Update, then click Start Service.
To edit the configuration file via command line:
- Stop the system.
- Open /opt/gpudb/core/etc/gpudb.conf in the desired text editor.
- Edit and save the file.
- Start the system.
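Putting the command-line steps together, a minimal sketch (run on the head node as the gpudb user or root; the editor shown is illustrative):
/opt/gpudb/core/bin/gpudb stop
vi /opt/gpudb/core/etc/gpudb.conf
/opt/gpudb/core/bin/gpudb start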
Uninstallation
Should you need to uninstall Kinetica, you'll need to shut down the system, remove the package, and remove related files, directories, & user accounts.
Remove the KAgent and Kinetica packages from your machine
On RHEL:
sudo yum remove kagent.<architecture>
sudo yum remove gpudb-<gpuhardware>-<licensetype>.<architecture>
On Debian-based:
sudo dpkg -r kagent.<architecture>
sudo dpkg -r gpudb-<gpuhardware>-<licensetype>.<architecture>
Optionally, remove the Active Analytics Workbench package from your machine
On RHEL:
sudo yum remove kinetica-ml.<architecture>
On Debian-based:
sudo dpkg -r kinetica-ml.<architecture>
Remove any user-defined persist directories (these directories are set in /opt/gpudb/core/etc/gpudb.conf)
Clean up all Kinetica artifacts (for both RHEL and Debian-based):
sudo rm -rf /opt/gpudb
Remove the gpudb & gpudb_proc users from the machine
On RHEL:
sudo userdel -r gpudb
sudo userdel -r gpudb_proc
On Debian-based:
sudo deluser --remove-home gpudb
sudo deluser --remove-home gpudb_proc
Remove the gpudb group from the machine:
groupdel gpudb
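To confirm the packages and accounts are gone, a quick verification sketch on RHEL-based systems (a Debian-based system would use dpkg -l with the same getent checks):
rpm -qa | grep -E 'kagent|gpudb'
getent passwd gpudb gpudb_proc
getent group gpudb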