Kinetica installation and configuration instructions.
Operating system, hardware, and network requirements to run Kinetica.
CPU Platform | Linux Distribution | Versions |
---|---|---|
x86 | RHEL | 6.x, 7.x |
x86 | CentOS | 6.x, 7.x |
x86 | Ubuntu | 14.x LTS, 16.x LTS |
x86 | SUSE | 11 SP3, 11 SP4 |
x86 | Debian | 8.x |
ppc64le | RHEL | 7.2 |
ppc64le | CentOS | 6.x, 7.x |
ppc64le | Ubuntu | 14.04 LTS, 16.x LTS |
Kinetica runs on the following 64-bit Linux-based operating systems.
OS | Supported Versions |
---|---|
Amazon AMI | 2012 |
CentOS | 6, 7 |
Fedora | 14+ |
RedHat | 6, 7 |
SUSE Linux Enterprise | 11+ |
Ubuntu | 12+ |
Component | Specification |
---|---|
CPU | Two-socket server with at least 8 cores; Intel x86-64, IBM POWER8 (ppc64le), or ARM processor |
GPU | Nvidia K20, K40, K80, P100, GTX 780 Ti, Tegra, or similar |
Memory | Minimum 8GB; 64-96GB recommended |
Hard Drive | SSD or 7200RPM SATA hard drive with at least 3x the memory capacity |
First, choose the preparation method by OS.
There are some steps that should be followed to set up your network and server configuration before installing Kinetica.
Kinetica can be run on a single server or as a cluster of multiple servers. When run as a cluster, it is important to ensure that the servers within the cluster can communicate with each other, and external computers can also communicate with the necessary services within the cluster.
The default ports used for communication with Kinetica (and between servers, if operating in a cluster) are the following:
Port | Function | Usage |
---|---|---|
2003 | This port must be open to collect the runtime system statistics. | Required Internally |
4000+N | For installations which have the external text search server enabled and communicating over TCP (rankN.text_index_address = tcp://…), there will be one instance of the text search server listening for each rank on every server in the cluster. Each of these daemons will be listening on a port starting at 4000 on each server and incrementing by one for each additional rank. | Optional Internally |
8080 | The Tomcat listener for the administrative web interface | Optional Externally |
8082 | In installations where users need to be authenticated to access the database, a preconfigured HTTPd instance listens on this port, which will authenticate incoming HTTP requests before passing them along to Kinetica. When authorization is required, all requests to Kinetica should be sent here, rather than the standard 9191+N ports. | Optional Externally |
8088 | This is the port on which Kinetica Reveal is exposed. For installations which have this feature enabled, it should be exposed to users. | Optional Externally |
8090 | This is the port on which the instance of Kibana preconfigured to run with Kinetica listens. | Optional Externally |
9001 | Database trigger ZMQ publishing server port. Users of database triggers will need the ability to connect to this port to receive data generated via the trigger. | Optional Externally |
9002 | Table monitor publishing server port. Users of database table monitors will need the ability to connect to this port to receive data generated via the table monitor. | Optional Externally |
9191+N | The primary port(s) used for public and internal Kinetica communications. There is one port for each rank running on each server, starting on each server at port 9191 and incrementing by one for each additional rank. These should be exposed for any system using the Kinetica APIs without authorization and must be exposed between all servers in the cluster. For installations where users should be authenticated, these ports should NOT be exposed publicly, but still should be exposed between servers within the cluster. | Required Internally, Optional Externally |
Kinetica highly encourages that proper firewalls be maintained and used to protect the database and the network at large. A full tutorial on how to properly set up a firewall is beyond the scope of this document, but the following are some best practices and starting points for more research.
All machines connected to the Internet at large should be protected from intrusion. As shown in the list above, there are no ports which are necessarily required to be accessible from outside of a trusted network, so we recommend only opening ports to the Internet and/or untrusted network(s) which are truly needed based on requirements.
There are some common scenarios which can act as guidelines on which ports should be available.
If Kinetica is running on a server where it will be accessible to the Internet at large, it is our strong suggestion that security and authentication be used and that ports 9191+N and 8080 NOT be exposed to the public, if possible. Those ports can potentially allow users to run commands anonymously, and unless security is configured to prevent it, any users connecting to them will have full control of the database.
For applications in which requests are being made to Kinetica via client APIs that do not use authentication, the 9191+N ports should be made available to the relevant set of servers. For applications using authentication via the bundled version of httpd, port 8082 should be opened. It is possible to have both ports open at the same time in cases where anonymous access is permitted; however, the security settings should be carefully set in this case to ensure that anonymous users do not overstep their bounds.
Additionally, if the API client is using table monitors or triggers, ports 9001 and/or 9002 should also be opened as needed.
In cases where the GUI interfaces to Reveal or Kibana are required, the 8088 (Reveal) and 8090 (Kibana) ports should be made available.
System administrators may wish to have access to the administrative web interface, in which case port 8080 should be opened, but carefully controlled.
On RHEL 6:
RHEL 6 uses iptables by default to configure its firewall settings. These can be updated by editing the /etc/sysconfig/iptables file, or, if you have X Server running, via a firewall-editing GUI that can be launched with the command:
system-config-firewall
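Alternatively, a rule can be added directly to /etc/sysconfig/iptables; the lines below are a minimal sketch (assuming port 8082 should be opened and that the rule is placed before any final REJECT rule), followed by a restart of the firewall:
# Line to add to /etc/sysconfig/iptables (adjust the port as needed)
-A INPUT -m state --state NEW -p tcp --dport 8082 -j ACCEPT
# Then reload the rules
sudo service iptables restart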
On RHEL 7:
RHEL 7 continues to use iptables under the hood, but the preferred way to interact with it is the firewall-cmd command or the firewall-config GUI. For example, the following commands will open port 8082 publicly:
firewall-cmd --zone=public --add-port=8082/tcp --permanent
firewall-cmd --reload
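firewall-cmd also accepts port ranges, which can be convenient for the 9191+N rank ports; a sketch, assuming ranks listening on ports 9191 through 9200:
firewall-cmd --zone=public --add-port=9191-9200/tcp --permanent
firewall-cmd --reload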
Each server in the Kinetica cluster should be properly prepared before installing Kinetica.
While every system is unique, there are several system parameters which are generally recommended to be set on every installation.
For optimal performance, the power scaling governor should be set to performance in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor to disable on-demand CPU throttling. The loop below covers cores 0-7; adjust the range to match the number of cores on your servers:
sudo bash -c 'for i in {0..7}; do cpufreq-set -c $i -g performance; done'
Verify that the setting was updated:
cpufreq-info
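If cpufreq-info is not available, the current governor can also be read directly from sysfs:
# Each line should report "performance" after the change
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor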
Transparent Huge Pages are the kernel's attempt to reduce the overhead of Translation Lookaside Buffer (TLB) lookups by increasing the size of memory pages. This setting is enabled by default, but can lead to sparse memory usage and decreased performance. Disable it with the following command:
sudo sh -c 'echo "never" > /sys/kernel/mm/transparent_hugepage/enabled'
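To confirm the change, read the setting back; the active value is shown in brackets:
# Expected output: always madvise [never]
cat /sys/kernel/mm/transparent_hugepage/enabled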
This section will provide instructions on installing Nvidia drivers if the target servers have Nvidia GPUs. If no Nvidia GPUs are present, you may skip forward to Install Kinetica.
Note
Disabling Secure Boot and SELinux may not be necessary for every setup.
The Nvidia drivers are installed by compiling and installing kernel modules. If they are not signed by a trusted source, then you will not be able to use secure boot. Consequently, you will likely want to disable secure boot in the BIOS of your server. To do so, you will need to (re)boot your server and enter the BIOS menus.
Similarly, SELinux tends to interfere with Nvidia driver installation and should be disabled by editing the /etc/sysconfig/selinux configuration file and changing the SELINUX line to:
SELINUX=disabled
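The edit takes effect at the next reboot; to switch SELinux to permissive mode for the current session as well (assuming it is currently enforcing), you can run:
sudo setenforce 0
getenforce    # should now report "Permissive" (or "Disabled" after a reboot)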
Ensure that the lspci command (which lists the PCI devices connected to the server) is installed:
sudo yum -y install pciutils
Perform a quick check to determine what Nvidia cards have been installed:
lspci | grep VGA
The output of the lspci command above should be similar to:
00:02.0 VGA compatible controller: Intel Corporation 4th Gen ...
01:00.0 VGA compatible controller: Nvidia Corporation ...
If you do not see a line that includes Nvidia, the GPU is not properly installed. Otherwise, you should see the make and model of the installed GPU devices.
The nouveau driver is an alternative to the Nvidia drivers generally installed on the server. It does not work with CUDA and must be disabled. The first step is to create the blacklist file at /etc/modprobe.d/blacklist-nouveau.conf, with contents like:
cat <<EOF | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
EOF
On RHEL 6
Backup your grub config:
sudo cp /boot/grub/grub.conf /boot/grub/grub.conf.bak
Edit your grub config and add rdblacklist=nouveau to the end of any lines starting with kernel. For example:
kernel /vmlinuz-... quiet rdblacklist=nouveau
On RHEL 7
Backup your grub config templates:
sudo cp /etc/sysconfig/grub /etc/sysconfig/grub.bak
Then, update your grub config template at /etc/sysconfig/grub by adding rd.driver.blacklist=nouveau to the GRUB_CMDLINE_LINUX variable. For example, change:
GRUB_CMDLINE_LINUX="crashkernel=auto ... quiet"
to:
GRUB_CMDLINE_LINUX="crashkernel=auto ... quiet rd.driver.blacklist=grub.nouveau"
Then, rebuild your grub config:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Backup the old initramfs image, generate a new initramfs image, disable any graphical logins and reboot the server:
sudo mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
sudo dracut /boot/initramfs-$(uname -r).img $(uname -r)
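As an optional check (assuming the dracut-provided lsinitrd utility is installed), verify that the new initramfs no longer contains the nouveau module:
# No output means nouveau is absent from the new image
lsinitrd /boot/initramfs-$(uname -r).img | grep -i nouveau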
The Nvidia driver will not allow you to install a new driver while X is running, so if X is enabled, it must first be exited. The simplest way to exit X is to switch to a TTY console using Ctrl-Alt-F1, log in, and run:
sudo init 3
After that has completed, X may be disabled so that the system does not attempt to start X in the case where the system has rebooted, but the driver has not finished installing. First, determine which graphical login your server uses:
ps aux | grep -v 'grep' | grep -E 'lightdm|gdm|kdm'
On RHEL 6
Disable the graphical login and reboot as follows (adjust for the login manager which is running):
echo "manual" | sudo tee -a /etc/init/lightdm.override
sudo reboot now
On RHEL 7
Disable the graphical login as follows (adjust for the login manager which is running):
sudo systemctl disable lightdm
sudo reboot now
After the system reboots, it should no longer start up with a graphical login. The graphical login will be re-enabled after completing the Nvidia driver installation.
After the reboot has completed, check to ensure that the nouveau driver has been disabled:
lsmod | grep "nouveau" > /dev/null && echo "WARNING: nouveau still active" || echo "Success!"
If nouveau is still active, then run the following command and repeat the above check to ensure that Nouveau has been removed:
sudo rmmod nouveau
Check if nouveau is installed as an RPM:
rpm -qa | grep xorg-x11-drv-nouveau
If the RPM is installed, then run the following command to uninstall it:
sudo yum remove xorg-x11-drv-nouveau
Several prerequisites should be installed before installing the Nvidia drivers.
Download the EPEL repo:
wget https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm
Install the EPEL repo:
sudo yum -y install ./epel-release-7-9.noarch.rpm
Install the dependencies:
sudo yum -y install kernel-devel kernel-headers gcc dkms acpid
Upgrade the kernel and restart the machine:
sudo yum -y upgrade kernel
sudo reboot now
This section deals with installing the drivers via the *.run executables provided by Nvidia.
To download only the drivers, navigate to http://www.nvidia.com/object/unix.html and click the Latest Long Lived Branch version under the appropriate CPU architecture. On the ensuing page, click Download and then click Agree and Download on the page that follows.
Note
The Unix drivers found in the link above are also compatible with all Nvidia Tesla models.
If you'd prefer to download the full driver repository, Nvidia provides a tool to recommend the most recent available driver for your graphics card at http://www.Nvidia.com/Download/index.aspx?lang=en-us.
If you are unsure which Nvidia devices are installed, the lspci command should give you that information:
lspci | grep -i "nvidia"
Download the recommended driver executable. Change the file permissions to allow execution:
chmod +x ./NVIDIA-Linux-$(uname -m)-*.run
Run the install. If you are prompted about cryptographic signatures on the kernel module, answer Sign the Kernel Module and then Generate a new key pair. At the end, DO NOT update your X config if it asks. Note that the following attempts to diagnose a common problem where the installer fails to correctly detect and deal with the situation where the kernel has been signed, but signed kernel modules are not required.
grep CONFIG_MODULE_SIG=y /boot/config-$(uname -r) && \
grep "CONFIG_MODULE_SIG_FORCE is not set" /boot/config-$(uname -r) && \
sudo ./NVIDIA-Linux-$(uname -m)-*.run -e || \
sudo ./NVIDIA-Linux-$(uname -m)-*.run
If there are any issues with the installation, the installer should notify you where the log is kept; the default location is usually:
/var/log/nvidia-installer.log
One common issue with installing the Nvidia driver is that it will fail out because the Nvidia driver taints the kernel. The issue is that the driver is not signed and the default install does not attempt to sign it, but the kernel is expecting a signed driver. If you encounter this error, you should re-run the install in expert mode:
sudo ./NVIDIA-Linux-<arch>-<version>.run -e
When prompted about cryptographic signatures on the kernel module, answer Sign the Kernel Module and then Generate a new key pair. Again, at the end, make sure to answer No when asked if you want the installer to update your X configuration.
This situation is usually detected during the above install step, but if there are issues, you can run this command separately.
Another issue that may arise is that if the kernel-devel version and the system kernel version don't match, the Nvidia driver install will not proceed after accepting the license. To fix this issue:
sudo yum -y update
sudo reboot now
Nvidia has a large readme online at:
http://us.download.nvidia.com/XFree86/Linux-<arch>/<version>/README/index.html
For example, on x86_64 for version 375.26, the readme is online at:
http://us.download.nvidia.com/XFree86/Linux-x86_64/375.26/README/index.html.
After the Nvidia drivers are installed, you can test the installation by running the command:
nvidia-smi
Which should return something similar to:
+------------------------------------------------------+
| NVIDIA-SMI 361.42 Driver Version: 361.42 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro K1100M Off | 0000:01:00.0 Off | N/A |
| N/A 44C P0 N/A / N/A | 8MiB / 2047MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
If an error is returned, stating:
Failed to initialize NVML: GPU access blocked by the operating system
there may be multiple versions of the Nvidia drivers on the system. Try running:
rpm -qa | grep -E "cuda|nvidia"
Review any versions listed and remove them as needed. Also run:
locate libnvidia | grep ".so."
Confirm that the files all end with either a 1 or the version of the Nvidia driver that you installed, for example .375.21.
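For example, if the rpm query above listed a stale packaged driver (the package name below is hypothetical), it could be removed before re-running the installer:
# Replace with the actual package name(s) reported by "rpm -qa | grep -E 'cuda|nvidia'"
sudo yum remove xorg-x11-drv-nvidia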
If you disabled the X Server to install your Nvidia driver, enable it now. First, check which service is responsible for the X Server:
ps aux | grep -v 'grep' | grep -E 'lightdm|gdm|kdm'
The following will enable the lightdm service, for the case where lightdm is responsible for the X Server. Adjust for the particular service running on your server, as determined by the above command.
On RHEL 6:
sudo rm -f /etc/init/lightdm.override
On RHEL 7:
sudo systemctl enable lightdm
There are some steps that should be followed to set up your network and server configuration before installing Kinetica.
Kinetica can be run on a single server or as a cluster of multiple servers. When run as a cluster, it is important to ensure that the servers within the cluster can communicate with each other, and external computers can also communicate with the necessary services within the cluster.
The default ports used for communication with Kinetica (and between servers, if operating in a cluster) are the following:
Port | Function | Usage |
---|---|---|
2003 | This port must be open to collect the runtime system statistics. | Required Internally |
4000+N | For installations which have the external text search server enabled and communicating over TCP (rankN.text_index_address = tcp://…), there will be one instance of the text search server listening for each rank on every server in the cluster. Each of these daemons will be listening on a port starting at 4000 on each server and incrementing by one for each additional rank. | Optional Internally |
8080 | The Tomcat listener for the administrative web interface | Optional Externally |
8082 | In installations where users need to be authenticated to access the database, a preconfigured HTTPd instance listens on this port, which will authenticate incoming HTTP requests before passing them along to Kinetica. When authorization is required, all requests to Kinetica should be sent here, rather than the standard 9191+N ports. | Optional Externally |
8088 | This is the port on which Kinetica Reveal is exposed. For installations which have this feature enabled, it should be exposed to users. | Optional Externally |
8090 | This is the port on which the instance of Kibana preconfigured to run with Kinetica listens. | Optional Externally |
9001 | Database trigger ZMQ publishing server port. Users of database triggers will need the ability to connect to this port to receive data generated via the trigger. | Optional Externally |
9002 | Table monitor publishing server port. Users of database table monitors will need the ability to connect to this port to receive data generated via the table monitor. | Optional Externally |
9191+N | The primary port(s) used for public and internal Kinetica communications. There is one port for each rank running on each server, starting on each server at port 9191 and incrementing by one for each additional rank. These should be exposed for any system using the Kinetica APIs without authorization and must be exposed between all servers in the cluster. For installations where users should be authenticated, these ports should NOT be exposed publicly, but still should be exposed between servers within the cluster. | Required Internally, Optional Externally |
Kinetica highly encourages that proper firewalls be maintained and used to protect the database and the network at large. A full tutorial on how to properly set up a firewall is beyond the scope of this document, but the following are some best practices and starting points for more research.
All machines connected to the Internet at large should be protected from intrusion. As shown in the list above, there are no ports which are necessarily required to be accessible from outside of a trusted network, so we recommend only opening ports to the Internet and/or untrusted network(s) which are truly needed based on requirements.
There are some common scenarios which can act as guidelines on which ports should be available.
If Kinetica is running on a server where it will be accessible to the Internet at large, it is our strong suggestion that security and authentication be used and that ports 9191+N and 8080 NOT be exposed to the public, if possible. Those ports can potentially allow users to run commands anonymously, and unless security is configured to prevent it, any users connecting to them will have full control of the database.
For applications in which requests are being made to Kinetica via client APIs that do not use authentication, the 9191+N ports should be made available to the relevant set of servers. For applications using authentication via the bundled version of httpd, port 8082 should be opened. It is possible to have both ports open at the same time in cases where anonymous access is permitted; however, the security settings should be carefully set in this case to ensure that anonymous users do not overstep their bounds.
Additionally, if the API client is using table monitors or triggers, ports 9001 and/or 9002 should also be opened as needed.
In cases where the GUI interfaces to Reveal or Kibana are required, the 8088 (Reveal) and 8090 (Kibana) ports should be made available.
System administrators may wish to have access to the administrative web interface, in which case port 8080 should be opened, but carefully controlled.
On Ubuntu 12 & Debian 8.x (Jessie):
Ubuntu 12 and Debian 8 use iptables by default to configure firewall settings. Rules can be added directly with the iptables command, for example:
sudo iptables -A INPUT -p tcp --dport 8181 -j ACCEPT
sudo iptables-save
On Ubuntu 14 & 16:
Ubuntu 14 & 16 come with a ufw
(Uncomplicated FireWall) command,
which controls the firewall, for example:
sudo ufw allow 8181
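ufw also accepts port ranges, which can be convenient for the 9191+N rank ports; a sketch, assuming ranks listening on ports 9191 through 9200:
sudo ufw allow 9191:9200/tcp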
Each server in the Kinetica cluster should be properly prepared before installing Kinetica.
While every system is unique, there are several system parameters which are generally recommended to be set on every installation.
For optimal performance, the power scaling governor should be set to performance in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor to disable on-demand CPU throttling. The loop below covers cores 0-7; adjust the range to match the number of cores on your servers:
sudo bash -c 'for i in {0..7}; do cpufreq-set -c $i -g performance; done'
Verify that the setting was updated:
cpufreq-info
Transparent Huge Pages are the kernel's attempt to reduce the overhead of Translation Lookaside Buffer (TLB) lookups by increasing the size of memory pages. This setting is enabled by default, but can lead to sparse memory usage and decreased performance. Disable it with the following command:
sudo sh -c 'echo "never" > /sys/kernel/mm/transparent_hugepage/enabled'
This section will provide instructions on installing Nvidia drivers if the target servers have Nvidia GPUs. If no Nvidia GPUs are present, you may skip forward to Install Kinetica.
Ensure that the lspci command (which lists the PCI devices connected to the server) is installed:
sudo apt-get -y install pciutils
Perform a quick check to determine what Nvidia cards have been installed:
lspci | grep VGA
The output of the lspci command above should be similar to:
00:02.0 VGA compatible controller: Intel Corporation 4th Gen ...
01:00.0 VGA compatible controller: Nvidia Corporation ...
If you do not see a line that includes Nvidia, the GPU is not properly installed. Otherwise, you should see the make and model of the installed GPU devices.
The nouveau driver is an alternative to the Nvidia drivers generally installed on the server. It does not work with CUDA and must be disabled. The first step is to create the blacklist file at /etc/modprobe.d/blacklist-nouveau.conf, with contents like:
cat <<EOF | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off
EOF
Then, run the following commands:
echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
sudo update-initramfs -u
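As an optional check (assuming the initramfs-tools lsinitramfs utility is present), verify that nouveau is no longer packed into the updated initramfs:
# No output means nouveau is absent from the image
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i nouveau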
Backup your grub config template:
sudo cp /etc/default/grub /etc/default/grub.bak
Then, update your grub config template at /etc/default/grub by adding rd.driver.blacklist=nouveau and rcutree.rcu_idle_gp_delay=1 to the GRUB_CMDLINE_LINUX variable. For example, change:
GRUB_CMDLINE_LINUX="quiet"
to:
GRUB_CMDLINE_LINUX="quiet rd.driver.blacklist=grub.nouveau rcutree.rcu_idle_gp_delay=1"
Then, rebuild your grub config:
sudo update-grub
The following prerequisites should be installed before installing the Nvidia drivers:
sudo apt-get -y install linux-headers-$(uname -r) make gcc-4.8
sudo apt-get -y install acpid dkms
Before running the install, you should exit out of any X environment, such as Gnome, KDE or XFCE. To exit the X session, switch to a TTY console using Ctrl-Alt-F1 and then determine whether you are running lightdm or gdm by running:
ps aux | grep -v 'grep' | grep -E 'lightdm|gdm|kdm'
Depending on which is running, stop the service by running the following commands (substitute gdm or kdm for lightdm as appropriate):
sudo service lightdm stop
sudo init 3
This section deals with installing the drivers via the *.run executables provided by Nvidia.
To download only the drivers, navigate to http://www.nvidia.com/object/unix.html and click the Latest Long Lived Branch version under the appropriate CPU architecture. On the ensuing page, click Download and then click Agree and Download on the page that follows.
Note
The Unix drivers found in the link above are also compatible with all Nvidia Tesla models.
If you'd prefer to download the full driver repository, Nvidia provides a tool to recommend the most recent available driver for your graphics card at http://www.Nvidia.com/Download/index.aspx?lang=en-us.
If you are unsure which Nvidia devices are installed, the lspci command should give you that information:
lspci | grep -i "nvidia"
Download the recommended driver executable. Change the file permissions to allow execution:
chmod +x ./NVIDIA-Linux-$(uname -m)-*.run
Run the install. If you are prompted about cryptographic signatures on the kernel module, answer Sign the Kernel Module and then Generate a new key pair. At the end, DO NOT update your X config if it asks. Note that the following attempts to diagnose a common problem where the installer fails to correctly detect and deal with the situation where the kernel has been signed, but signed kernel modules are not required.
grep CONFIG_MODULE_SIG=y /boot/config-$(uname -r) && \
grep "CONFIG_MODULE_SIG_FORCE is not set" /boot/config-$(uname -r) && \
sudo ./NVIDIA-Linux-$(uname -m)-*.run -e || \
sudo ./NVIDIA-Linux-$(uname -m)-*.run
If there are any issues with the installation, the installer should notify you where the log is kept; the default location is usually:
/var/log/nvidia-installer.log
One common issue with installing the Nvidia driver is that it will fail out because the Nvidia driver taints the kernel. The issue is that the driver is not signed and the default install does not attempt to sign it, but the kernel is expecting a signed driver. If you encounter this error, you should re-run the install in expert mode:
sudo ./NVIDIA-Linux-<arch>-<version>.run -e
When prompted about cryptographic signatures on the kernel module, answer Sign the Kernel Module and then Generate a new key pair. Again, at the end, make sure to answer No when asked if you want the installer to update your X configuration.
This situation is usually detected during the above install step, but if there are issues, you can run this command separately.
Another issue that may arise is that if the kernel development version and the system kernel version don't match up, the Nvidia driver install will not proceed after accepting the license. To fix this issue:
sudo apt-get update && sudo apt-get install linux-headers-$(uname -r)
sudo reboot now
Nvidia has a large readme online at:
http://us.download.nvidia.com/XFree86/Linux-<arch>/<version>/README/index.html
For example, on x86_64 for version 375.26, the readme is online at:
http://us.download.nvidia.com/XFree86/Linux-x86_64/375.26/README/index.html.
After the Nvidia drivers are installed, you can test the installation by running the command:
nvidia-smi
Which should return something similar to:
+------------------------------------------------------+
| NVIDIA-SMI 361.42 Driver Version: 361.42 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro K1100M Off | 0000:01:00.0 Off | N/A |
| N/A 44C P0 N/A / N/A | 8MiB / 2047MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
If an error is returned, stating:
Failed to initialize NVML: GPU access blocked by the operating system
there may be multiple versions of the Nvidia drivers on the system. Try running:
dpkg --list | grep -E "cuda|nvidia"
Review any versions listed and remove them as needed. Also run:
locate libnvidia | grep ".so."
Confirm that the files all end with either a 1 or the version of the Nvidia driver that you installed, for example .375.21.
If you had to stop the X Server to install your Nvidia driver, the simplest way to get back into X is to reboot the server:
sudo reboot now
Next, choose the installation method.
The Visual Installer is an executable which runs a simple web server application that will guide the user through Kinetica installation.
The Visual Installer does NOT have to be run on one of the servers which will host the database, but must be run on a server that has SSH access to all of the nodes of the database cluster. Also, all nodes of the cluster must (at least for the purposes of this install) use the same package management system (yum or apt) and allow SSH as the root user with a common root password.
To enable root login on servers which have it disabled, edit /etc/ssh/sshd_config and update the PermitRootLogin line to read:
PermitRootLogin yes
To set the root user's password, run:
sudo passwd root
The Visual Installer is distributed as a gzip tar archive. It can be extracted by running:
tar xf gpudb-installer-*.tgz
This will extract the archive and place all the files into a gpudb-installer folder inside the current directory. Then, to start the installation, run:
gpudb-installer/setup.sh
When the Visual Installer is ready, it will display sample instructions on how to update your firewall to open the port for accessing the installation application.
When you have started the Visual Installer and adjusted the firewall as needed, open a browser to the URL displayed by the setup.sh script. If the Visual Installer is run on the local machine, you can browse to:
http://localhost:8181/installer
The first step is identifying the nodes of the cluster. If you are installing on an existing cluster, after specifying the head node, the Visual Installer should automatically display the list of nodes configured for the cluster. Nodes can be added and new database versions can be applied to existing clusters using this tool.
After specifying the nodes on which to install, information about each node in the cluster will be displayed, including the host OS, the number of CPUs & GPUs installed, and the amount of memory & hard disk space available.
After the nodes have been confirmed, the installation of (or upgrade to) the packaged database version will commence on all specified nodes.
This step will allow you to designate the number of ranks that will run on each node. The first node listed is the head node and will have one rank designated to orchestrating the cluster, accepting incoming requests, and aggregating query results. Ideally, all nodes should have the same number of ranks as GPUs, with the exception of the head node, which will have one more rank than the number of GPUs.
This step will allow you to set the license key if you have been provided one, or generate a license message to send to Kinetica support (support@kinetica.com) if you have not yet requested a license key.
This step will allow you to update the configuration file for Kinetica.
At this point, the install is complete on all the servers in the cluster. You can skip to Starting Kinetica and view the administrative interface for the database; or first, for additional configuration options, see the Configuration Reference.
Multiple servers may be running Kinetica, but there is always one server which is considered the head node (or aggregator node), which receives user requests and parcels them out to the other worker nodes of the system. Kinetica should be installed on this head node first, as it makes installation of the worker nodes easier. First, the install package should be downloaded. Install the package using the standard RPM install procedures for a local package.
On RHEL:
sudo yum install ./gpudb-cuda8-license-x.x.x.rpm
On Debian/Ubuntu:
sudo dpkg -i ./gpudb-cuda8-license-x.x.x.deb
sudo apt-get -f -y install
This installs the package to the directory /opt/gpudb and creates a user and group named gpudb with a home directory at /home/gpudb. SSH keys are also created to allow password-less SSH access between servers for the gpudb user when configured as a cluster. This will also register Kinetica as a service under the name gpudb.
If you have a cluster of servers running Kinetica, you should edit /opt/gpudb/core/etc/hostsfile to update the head node IP address and add the other servers to the list of hosts. The format is one host per line, each line of the form:
<host ip> slots=<number of ranks> max_slots=<number of ranks>
The slots parameter denotes how many Kinetica processes should run on each node. Note that slots and max_slots must equal each other for a given host.
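For illustration only (the addresses and rank counts below are hypothetical), a hostsfile for a three-node cluster whose head node runs five ranks and whose worker nodes run four each might look like:
192.168.0.10 slots=5 max_slots=5
192.168.0.11 slots=4 max_slots=4
192.168.0.12 slots=4 max_slots=4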
When filling out the hostsfile, note that the first host listed is the address of the head HTTP server node that will handle requests; this is also the server that will be used to start and stop the Kinetica cluster. After the hostsfile is updated, run the following command to set up passwordless SSH from the head node to the worker nodes:
sudo su --command=/opt/gpudb/core/bin/gpudb_hosts_ssh_copy_id.sh - gpudb
At this point, the /opt/gpudb/core/bin/gpudb_hosts_ssh_execute.sh script can be run to install Kinetica on all of the worker nodes, provided that the gpudb user has sudo access. It will copy the installation package to all nodes and install it. If gpudb does not have sudo access, you may use any other user with sufficient privileges that is common to all servers, making sure to place the install package in a folder accessible to that user. The following commands can be run to perform the installation.
On RHEL:
cp ./gpudb-cuda8-license-x.x.x.rpm /home/gpudb/.
chown gpudb:gpudb /home/gpudb/gpudb-cuda8-license-x.x.x.rpm
sudo su - gpudb
cd /opt/gpudb/core/bin
./gpudb_hosts_ssh_execute.sh "rsync -av --ignore-existing \
$(./gpudb_hosts_addresses.sh | head -n 1):~/gpudb-cuda8-license-x.x.x.rpm \
~/."
./gpudb_hosts_ssh_execute.sh "sudo yum -y install ~/gpudb-cuda8-license-x.x.x.rpm"
On Debian/Ubuntu:
cp ./gpudb-cuda8-license-x.x.x.deb /home/gpudb/.
chown gpudb:gpudb /home/gpudb/gpudb-cuda8-license-x.x.x.deb
sudo su - gpudb
cd /opt/gpudb/core/bin
./gpudb_hosts_ssh_execute.sh "rsync -av --ignore-existing \
$(./gpudb_hosts_addresses.sh | head -n 1):~/gpudb-cuda8-license-x.x.x.deb \
~/."
./gpudb_hosts_ssh_execute.sh "sudo dpkg -i ~/gpudb-cuda8-license-x.x.x.deb"
./gpudb_hosts_ssh_execute.sh "sudo apt-get -f -y install"
After installing Kinetica on all of the nodes, you should generate a license request file by running the following on the head node:
/opt/gpudb/core/bin/gpudb_keygen > /tmp/kinetica_license_message.txt
Send the resulting /tmp/kinetica_license_message.txt file to support@kinetica.com to request your license key.
After receiving your license key, edit the configuration file (on the head node only) at /opt/gpudb/core/etc/gpudb.conf to configure your cluster and add the license key. Also, ensure that the various connectors and subsystems of Kinetica are enabled or disabled according to your needs.
Some of the more critical parameters in gpudb.conf to configure are:
A valid license key; received by email, it must be entered for the parameter:
license_key = ...
The number of processes, which should be set to the number of GPUs on the machine plus one extra process for the head-node HTTP server. For example, if your machine has four attached GPUs, set the parameter:
number_of_ranks = 5
The GPUs to use, set via the parameters below. Note that the rank0 (head-node) HTTP server process can and should share a GPU with the first worker rank:
rank0.gpu = 0
rank1.taskcalc_gpu = 0
rank2.taskcalc_gpu = 1
rank3.taskcalc_gpu = 2
rank4.taskcalc_gpu = 3
The directory in which to store the data. Note that you can split where different types of data are stored if required:
persist_directory = /opt/gpudb/persist
For additional configuration options, see the Configuration Reference.
After Kinetica has been installed and configured, it should be started using the command:
/etc/init.d/gpudb start
Using the same command (/etc/init.d/gpudb), you can also stop and restart Kinetica.
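For example, assuming the service script accepts the usual init-style arguments:
/etc/init.d/gpudb stop
/etc/init.d/gpudb restart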
To validate that Kinetica has been installed and started properly, you can perform the following tests.
To ensure that Kinetica has started (you may have to wait a moment while the system initializes), you can run curl on the head node to check if the server is responding and port is available with respect to any running firewalls:
$ curl localhost:9191
GPUdb is running!
$ curl localhost:9192
GPUdb is running!
...
This test can also be performed on all of the ranks (ports 9191+N) on all the servers in the cluster.
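A small sketch for checking several ranks in one pass (assuming five ranks listening on ports 9191 through 9195 on the local server):
for port in $(seq 9191 9195); do curl -s localhost:$port; echo; done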
You can also run a test to ensure that the API is responding properly. There is an admin simulator project in Python provided with the Python API, which pulls statistics from the Kinetica instance. Running this on the head node, you should see:
$ python /opt/gpudb/api/python/gpudb/gadmin_sim.py
**********************
Total tables: 0
Total top-level tables: 0
Total collections: 0
Total number of elements: 0
Total number of objects: 0
The administrative interface itself can be used to validate that the system is functioning properly. Simply point your browser to port 8080 of the head node:
http://localhost:8080
Once you've arrived at the login page, you'll need to change your password using the following steps:
Log in with the default credentials (admin / admin), and click Login.
When prompted, enter a new password; it cannot remain the default (admin) and may include the special characters (! @ # $ % ^ & * ? _ ~).
Important
After the default password is updated, you'll be required to login to access GAdmin from now on.
The log file located at /opt/gpudb/core/logs/gpudb.log should be the first place to check for any system errors. Any issues which would prevent successful start-up of Kinetica will be logged as ERROR in the log. Consequently, running the following command will return enough information to provide a good starting point for further investigation:
grep ERROR /opt/gpudb/core/logs/gpudb.log | head -n 10
Kinetica supports HTTPS as a way to secure communication with the database. To enable HTTPS, edit the system config file /opt/gpudb/core/etc/gpudb.conf, specifying true for the use_https option. In addition, set https_key_file and https_cert_file to point to the appropriate .pem files.
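A sketch of the relevant gpudb.conf entries; the certificate and key paths shown are placeholders for your own files:
use_https = true
https_key_file = /path/to/server.key.pem
https_cert_file = /path/to/server.cert.pem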
Should you need to uninstall Kinetica, you'll need to remove the package, clean up Kinetica artifacts, remove any user-defined persist directories, and delete the gpudb user and group.
Remove the package from your machine
On RHEL:
sudo yum remove <package-name>
On Debian-based:
sudo dpkg -r <package-name>
Clean-up all Kinetica artifacts (for both RHEL and Debian-based):
sudo rm -rf /opt/gpudb
Remove any user-defined persist directories (these directories are set in /opt/gpudb/core/etc/gpudb.conf)
Remove the gpudb user from the machine
On RHEL:
sudo userdel gpudb
On Debian-based:
sudo deluser gpudb
Remove the gpudb group from the machine:
sudo groupdel gpudb