High Availability Installation & Configuration

Pre-requisites

An HA installation requires three components:

  1. Two or more clusters with matching Kinetica installations
  2. A Kinetica HA installation package that matches the cluster installation
  3. Nginx installed on each cluster's head node

Install

Install the HA Plugin RPM or DEB on the head node of each Kinetica cluster. This will create the directory /opt/gpudb/ha.

  • On RHEL:

    yum -y install gpudb-ha-<version>-<release>.x86_64.rpm
    
  • On Debian:

    apt-get install -f ./gpudb-ha-<version>-<release>.x86_64.deb
    
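Either command should create /opt/gpudb/ha. An optional sanity check on each head node (plain POSIX shell, assuming nothing beyond the install path above):

```shell
# Confirm the HA plugin package laid down its install directory
if [ -d /opt/gpudb/ha ]; then
    echo "HA plugin installed at /opt/gpudb/ha"
    installed=yes
else
    echo "HA plugin directory missing; check the package install output"
    installed=no
fi
```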

Configure Core

  1. Shut down all clusters:

    sudo service gpudb stop
    
  2. Change the head node of each cluster to run on port 9192:

    sed -i 's/^head_port.*/head_port = 9192/' /opt/gpudb/core/etc/gpudb.conf
    
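The substitution in step 2 can be rehearsed on a scratch copy before touching the live /opt/gpudb/core/etc/gpudb.conf; the sketch below uses a temporary file with a placeholder head_port value:

```shell
# Rehearse the port change on a throwaway file, not the real gpudb.conf
conf=$(mktemp)
printf 'head_port = 9191\n' > "$conf"

# Same substitution as step 2: rewrite the head_port line to 9192
sed -i 's/^head_port.*/head_port = 9192/' "$conf"

grep '^head_port' "$conf"    # prints: head_port = 9192
rm -f "$conf"
```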

Configure HA

The Kinetica HA configuration parameters are in the XML file /opt/gpudb/ha/conf/ha-proc-conf.xml. In this file, child refers to the head node of the cluster being configured, and brother refers to the head nodes of the other clusters. Below are the parameters you will have to change, along with sample values for a three-cluster HA configuration whose head nodes are named ha1, ha2, and ha3.

  • child.store (URL of this cluster's head node)
      ha1, ha2, ha3: http://127.0.0.1:9192
  • child.store.name (domain name of this cluster's head node)
      ha1: ha1
      ha2: ha2
      ha3: ha3
  • brothers.stores (URLs of the other clusters' head nodes)
      ha1: http://ha2:9192,http://ha3:9192
      ha2: http://ha1:9192,http://ha3:9192
      ha3: http://ha1:9192,http://ha2:9192
  • msgq.url (domain names of all message queue hosts)
      ha1, ha2, ha3: ha1,ha2,ha3
  • child.queue.name.create (create queue for this cluster)
      ha1: kdb_create_ha1
      ha2: kdb_create_ha2
      ha3: kdb_create_ha3
  • brother.queue.names.create (create queues for the other clusters)
      ha1: kdb_create_ha2,kdb_create_ha3
      ha2: kdb_create_ha1,kdb_create_ha3
      ha3: kdb_create_ha1,kdb_create_ha2
  • child.queue.name.add (add queue for this cluster)
      ha1: kdb_add_ha1
      ha2: kdb_add_ha2
      ha3: kdb_add_ha3
  • brother.queue.names.add (add queues for the other clusters)
      ha1: kdb_add_ha2,kdb_add_ha3
      ha2: kdb_add_ha1,kdb_add_ha3
      ha3: kdb_add_ha1,kdb_add_ha2
  • rabbitmq.user (RabbitMQ username)
      ha1, ha2, ha3: gpudb
  • rabbitmq.password (RabbitMQ password)
      ha1, ha2, ha3: gpudb123
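The per-node values follow a simple pattern: each node's brother.* entries are just the other nodes' names. A small shell sketch of that pattern, using the sample node names and port from this guide:

```shell
# Build the brothers.stores value for one node from the full node list
nodes="ha1 ha2 ha3"
child="ha2"

brothers=""
for n in $nodes; do
    [ "$n" = "$child" ] && continue
    brothers="${brothers:+$brothers,}http://${n}:9192"
done
echo "$brothers"    # prints: http://ha1:9192,http://ha3:9192
```

The same loop with kdb_create_ or kdb_add_ prefixes in place of the URL yields the brother.queue.names.* values.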

Configure RabbitMQ

Kinetica HA uses RabbitMQ to sync data and commands between clusters. The configuration for these queues is on the head node of each cluster, under /opt/gpudb/ha/rabbitmq-server/conf. The first file you will need to change is rabbitmq.config; an example follows.

[
        {rabbit,
                [
                        {default_user,        <<"gpudb">>},
                        {default_pass,        <<"gpudb123">>},
                        {cluster_nodes,
                                {[
                                'rabbit@ha1','rabbit@ha2','rabbit@ha3'
                                ], disc}
                        },
                        {loopback_users, []}
%%                      ,{collect_statistics, fine}
%%                      ,{cluster_partition_handling, pause_minority}
%%                      ,{delegate_count, 64}
%%                      ,{hipe_compile, true}
                ]
        }
].

If you are not using the server hostnames as the server names in your queues, edit the file rabbitmq-env.conf on each cluster head node, uncommenting and changing the line:

NODENAME=rabbit@<server name>

Start RabbitMQ on each cluster's head node:

service gpudb-ha mq-start

From a browser on each head node, log into that node's RabbitMQ management page at http://ha1:15672/#/, http://ha2:15672/#/, or http://ha3:15672/#/ and ensure the head node can access its RabbitMQ instance and associated message queue. The login credentials are those configured in rabbitmq.config above.
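The same check can be scripted from a shell using RabbitMQ's management HTTP API instead of the browser UI. The hostnames and credentials below are the sample values from this guide; substitute your own:

```shell
# Probe each node's RabbitMQ management API with the configured credentials
reachable=0
unreachable=0
for host in ha1 ha2 ha3; do
    if curl -fsS --connect-timeout 5 -u gpudb:gpudb123 \
            "http://${host}:15672/api/overview" > /dev/null 2>&1; then
        echo "${host}: management API reachable"
        reachable=$((reachable + 1))
    else
        echo "${host}: management API NOT reachable"
        unreachable=$((unreachable + 1))
    fi
done
```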

Configure GAdmin

You can centrally manage all Kinetica clusters in an HA configuration through the Kinetica administration application (GAdmin) by editing its configuration files. This application can be run from any of the clusters or on a separate server. Unlike the HA configuration described above, this only needs to be done on one server.

  1. In the file /opt/gpudb/tomcat/webapps/gadmin/js/settings.js, change the HA_ENABLED line to true:

    var HA_ENABLED = true;
    
  2. Copy the file ha-config.json (which gets created when gpudb-ha is started) to ha-sys-config.json. If you are running GAdmin from a separate server, you will need to copy this file from the /opt/gpudb/tomcat/webapps/gadmin directory on one of the cluster head nodes.

  3. Add the following lines to the file, editing them as appropriate for your configuration. In the example below, the GAdmin instance on ha3 is the HA-configured one.

    "brother.queue.names.create": "kdb_create_ha1,kdb_create_ha2",
    "child.ha.process.url":"http://127.0.0.1:9192",
    "brothers.ha.process.urls":"http://ha1:9192,http://ha2:9192",
    "child.ha.gadmin.url":"http://ha3:8080/gadmin",
    "brothers.ha.gadmin.urls":"http://ha1:8080/gadmin,http://ha2:8080/gadmin",
    "child.ha.gstats.url":"http://ha3:8080/gstats",
    "brothers.ha.gstats.urls":"http://ha1:8080/gstats,http://ha2:8080/gstats"
    
  4. Start Kinetica HA:

    service gpudb start
    service gpudb-ha start
    
  5. Log into GAdmin on the centrally-managed head node (or each cluster's head node, if not using central management), and perform & verify the following actions through it:

    • Create a table
    • Insert data into the table
    • Read data from the table
    • Delete the table
    • Create a user
    • Delete the user
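After editing ha-sys-config.json in step 3, it is worth confirming the file still parses as JSON before starting the services; a trailing comma on the last added line is an easy mistake. A quick check, assuming python3 is available on the head node:

```shell
# Validate the edited HA config; the json.tool module exits non-zero on bad JSON
f=/opt/gpudb/tomcat/webapps/gadmin/ha-sys-config.json
if python3 -m json.tool "$f" > /dev/null 2>&1; then
    echo "ha-sys-config.json is valid JSON"
    json_ok=yes
else
    echo "ha-sys-config.json failed to parse; re-check the edits"
    json_ok=no
fi
```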