Kinetica is a fast, distributed, GPU-accelerated database with advanced filtering, visualization, and aggregation functionality.
From a user's perspective, data in Kinetica is organized in a manner similar to a standard relational database management system (RDBMS). A Kinetica database consists of tables, each defined by a type. The available column types include the standard base types (int, long, float, double, string, & bytes), as well as numerous sub-types supporting date/time, geospatial, and other data forms. The native API presents the system as an object-based datastore, with each object corresponding to a row in a table.
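To make the column model concrete, the sketch below describes a hypothetical table type as a list of columns, each pairing a base type with optional sub-type properties. The table, column names, and property strings are illustrative assumptions, not a definitive schema.

```python
# Illustrative only: a hypothetical "sensor_readings" table type.
# Each column pairs one of the base types (int, long, float, double,
# string, bytes) with optional sub-type properties.
sensor_readings_type = [
    # (column name, base type, sub-type properties)
    ("sensor_id",   "long",   []),
    ("reading",     "double", []),
    ("recorded_at", "long",   ["timestamp"]),  # date/time sub-type (assumed property name)
    ("location",    "string", ["wkt"]),        # geospatial sub-type (assumed property name)
    ("payload",     "bytes",  []),
]
```

In the native API, one row of such a table would be handled as a single object whose fields match these columns.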
Kinetica provides basic functionality to create tables, add rows, read rows, and delete rows. What sets Kinetica apart is its specialized filtering and visualization functions. These functions can be performed through our native API or our ODBC/JDBC connectors, which support a subset of SQL-92. This allows users to connect Kinetica to third-party GUIs and lets developers quickly adapt existing code to work with Kinetica.
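As a rough sketch of how one of these operations might look against the native REST interface, the snippet below filters a hypothetical table into a named view. The endpoint path, request fields, table, and expression are all assumptions for illustration rather than a definitive reference.

```python
import requests

# Assumed head-node URL for a local deployment.
KINETICA_URL = "http://localhost:9191"

def call(endpoint, payload):
    """POST a JSON-encoded request to a native API endpoint and return the JSON reply."""
    resp = requests.post(KINETICA_URL + endpoint, json=payload)
    resp.raise_for_status()
    return resp.json()

# Create a filtered view of a hypothetical table using an expression.
reply = call("/filter", {
    "table_name": "sensor_readings",
    "view_name": "hot_readings",
    "expression": "reading > 90.0",
    "options": {},
})
print(reply)
```

The same operation could be expressed through the ODBC/JDBC connectors as an ordinary SQL SELECT with a WHERE clause.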
Kinetica has a distributed architecture designed for data processing at scale. A standard cluster consists of identical nodes running on commodity hardware and equipped with GPUs. A single node is chosen to be the head aggregation node.
Kinetica is designed to be highly scalable. A cluster can be scaled up at any time to increase storage capacity and processing power, with near-linear performance improvements for most operations. Data can be sharded automatically, or the sharding can be specified and tuned by the user.
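As a sketch of user-specified sharding, a shard key is declared per table by marking one or more columns; rows that share a key value are stored on the same shard. The property name and columns below are assumptions for illustration.

```python
# Illustrative: mark sensor_id as the shard key when defining the table type,
# so all rows for a given sensor land on the same shard.
# ("shard_key" as a column property name is an assumption for this sketch.)
sensor_readings_properties = {
    "sensor_id":   ["shard_key"],
    "recorded_at": ["timestamp"],
}
# Omitting a shard key would leave the distribution of rows to Kinetica's
# automatic sharding.
```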
Kinetica is an ODBC-compatible database, supporting ANSI SQL-92 compliant syntax. Further, its native API can be accessed via RESTful HTTP endpoints using either JSON or Avro serialization. Officially supported, open-source language bindings are provided for Java, Python, JavaScript, C++, and C#. Additional language bindings can be constructed for any language capable of making HTTP requests and parsing JSON.
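Because the native endpoints speak plain HTTP and JSON, a minimal binding needs nothing beyond a language's standard library. The sketch below uses only Python's built-in modules; the endpoint path, request fields, and table name are assumptions for illustration.

```python
import json
import urllib.request

# Assumed head-node URL; any language that can issue an HTTP POST and
# parse JSON could make the same request.
url = "http://localhost:9191/show/table"

body = json.dumps({
    "table_name": "sensor_readings",  # hypothetical table
    "options": {},
}).encode("utf-8")

req = urllib.request.Request(
    url,
    data=body,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read().decode("utf-8"))

print(reply)
```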
Kinetica also connects seamlessly to a variety of other data processing and analytical frameworks, including Apache Spark, Storm, and NiFi.
Host management services allow the cluster to be brought up & down and to have its status monitored from a single node. Cluster management utilities allow all nodes to be upgraded, modified, & maintained from one location.