7.1 Release Notes

Publish Date: 2022.03.20


Kinetica is now available as a managed application in the AWS Marketplace!

  • Streamlined provisioning process simplifies setup and gets you up and running in 1 hour
  • Pay-as-you-go option allows you to pay for what you use, billed through AWS
  • Infrastructure provisioned into your AWS subscription
  • Kinetica Workbench simplifies SQL analysis and collaboration
  • Easily ingest data from a developer's local environment into your Kinetica marketplace instance
  • Easily set up streaming ingestion from Kafka, AWS S3, and Azure Blob Storage from within Kinetica
  • User-Defined Functions (UDFs), Graphs, ML models, and more can be managed & executed via SQL
  • Integrated performance monitoring with AWS CloudWatch
  • One-click upgrades

Data and Analytics Capabilities

SQL Analytics

Kinetica supports:

  • Standard SQL data types
  • Standard DDL, DML, subqueries, joins, and set/pivot operations, including date/time, math, & string manipulation functions
  • Logical & materialized variants of views and external tables
  • Out of the box SQL analytics: aggregation, grouping, distribution, window, machine learning, graph, and GIS functions
  • User-Defined Functions (UDF) written in Python, Java, or C++
    • Scalar or table functions
    • Both distributed & non-distributed execution modes
    • Management & execution supported in SQL

Key-Value Lookups

Kinetica can perform high-performance, high-throughput key-value lookups from tables and views, available within the C++, C#, Java, and Python native APIs.
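For intuition, the lookup pattern can be sketched in pure Python as a hash index over key columns; the class, column, and value names below are illustrative only, not part of Kinetica's API:

```python
class KeyValueIndex:
    """Sketch of the idea behind key-value lookups: a hash index over one
    or more key columns gives constant-time retrieval of matching records."""
    def __init__(self, records, key_columns):
        self.index = {}
        for rec in records:
            key = tuple(rec[c] for c in key_columns)
            self.index.setdefault(key, []).append(rec)

    def lookup(self, *key):
        """Return all records whose key columns equal the given values."""
        return self.index.get(key, [])

rows = [
    {"symbol": "KNET", "price": 10.5},
    {"symbol": "ACME", "price": 3.2},
    {"symbol": "KNET", "price": 10.7},
]
idx = KeyValueIndex(rows, ["symbol"])
hits = idx.lookup("KNET")   # both KNET records, in insertion order
```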

Temporal Analytics

Kinetica enables temporal analytics:

  • Date, Time, DateTime, and Timestamp data types
  • Date/time formatting codes to assist with data ingestion
  • Date/Time SQL functions and expression support
  • Inexact temporal joins using the ASOF join function
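The ASOF join pairs each left-side row with the nearest earlier-or-equal timestamp on the right side rather than requiring an exact match. A pure-Python sketch of that matching rule (the data and function names are illustrative, not Kinetica syntax):

```python
from bisect import bisect_right

def asof_join(left_times, right):
    """For each timestamp in left_times, pick the entry of `right` (a list
    of (timestamp, payload) pairs sorted by timestamp) with the greatest
    timestamp <= it, or None when no such entry exists."""
    times = [t for t, _ in right]
    out = []
    for t in left_times:
        i = bisect_right(times, t) - 1   # index of last right-side time <= t
        out.append((t, right[i][1] if i >= 0 else None))
    return out

# Match each trade time to the most recent quote at or before it.
quotes = [(100, "q1"), (105, "q2"), (112, "q3")]
matches = asof_join([99, 105, 110, 120], quotes)
```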

Geospatial Analytics

Support for ingest of geospatial data from a variety of data formats, including:

  • WKTs within text-based files
  • GeoJSON
  • Shapefiles

Store a variety of geospatial data objects in Kinetica, including: Points, Polygons, Linestrings, Tracks (GPS Positions), and Labels

An extensive library of over 130 geospatial SQL functions ported from PostGIS enables:

  • Spatial filtering
  • Spatial aggregation
  • GPS tracking
  • Geospatial joins
  • Spatial relations
  • Geometry transformation
  • Merge, dissolve, and simplification
  • Measurement
  • Isochrones and isodistance

Tracking streaming GPS data through space and time:

  • Native time-series track tables
  • Special track functions for understanding a track’s relationship to other geospatial objects
  • Track visualization rendering mode that shows the path of travel
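As one example of the kind of per-track measurement these functions enable, here is a pure-Python sketch that computes a track's path length from its GPS fixes (illustrative only, not Kinetica's implementation):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def track_length_km(points):
    """Sum of leg distances along a time-ordered list of (lat, lon) fixes."""
    return sum(haversine_km(*a, *b) for a, b in zip(points, points[1:]))

# A short northbound track: one degree of latitude is roughly 111 km.
track = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
length = track_length_km(track)
```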

Endpoints for server-side and client-side visualization:

  • Web Map Service (WMS) for large-scale, server-side visualization in a variety of styles:
    • Raster
    • Class-break Raster
    • Tracks
    • Heatmap
    • Contours
    • Labels
  • Vector tile service for client-side visualization

Graph Analytics

Model graphs from relational data using an intuitive identifier syntax that maps data to nodes, edges, restrictions, and weights. Weights and restrictions can be statically associated with the graph model or applied at query-time to retain flexibility in your data model. Graphs can be geospatial or non-geospatial (property graph) in nature.

Kinetica supports a wide variety of graph analytic functions, including:

  • Single Source Shortest Path
  • Inverse Shortest Path
  • Multiple Routing (Traveling Salesman)
  • Multiple Supply/Demand
  • Cycle Detection
  • Page Rank
  • Probability Rank
  • All Paths
  • Betweenness Centrality
  • Backhaul Routing
  • Hidden Markov Model (HMM) Map Matching
  • Match Origin/Destination Pairs
  • Model Statistical Analysis
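These solvers run server-side over graph tables; for intuition, here is a minimal Dijkstra sketch of the single-source shortest path problem addressed by the first entry (the graph and names are illustrative):

```python
import heapq

def sssp(edges, source):
    """Single-source shortest path over a weighted directed graph given as
    (from, to, weight) triples; returns {node: distance_from_source}."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry; skip
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = [("A", "B", 4), ("A", "C", 1), ("C", "B", 2), ("B", "D", 5)]
dist = sssp(roads, "A")   # A->C->B (cost 3) beats A->B directly (cost 4)
```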

Kinetica supports querying property graphs based on given criteria applied to graph attributes.

Geospatial graphs can be displayed on a map using Kinetica’s Web Map Service (WMS).

ML Analytics

Kinetica supports several built-in SQL functions for common algorithms like linear regression and outlier detection.
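As a sketch of what the linear regression function computes, here is an ordinary least squares fit in pure Python (illustrative; in Kinetica this is evaluated via SQL, not client-side code):

```python
def linear_fit(xs, ys):
    """Ordinary least squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)              # variance term
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance term
    a = sxy / sxx
    return a, my - a * mx

# Points lying exactly on y = 2x + 1 recover slope 2 and intercept 1.
slope, intercept = linear_fit([1, 2, 3, 4], [3, 5, 7, 9])
```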

Build your own ML models and deploy them in Kinetica:

  • Any model and ML framework is compatible
  • Load the model and invoke inferences with SQL
  • Inference in either batch (static data) or continuous (streaming data) mode
  • Scale inference horsepower by defining the number of replicas, on Kinetica’s infrastructure or your own Kubernetes cluster

Real-time Decisioning

Kinetica enables end-to-end real-time decisioning with streaming data ingestion, high-performance analytics, and data stream egress.

Ingress / Egress

Programmatic Interfaces

Import and export data using one of Kinetica’s programmatic interfaces:

  • SQL
  • Native APIs: Python, Java, C++, C#, Node.js, JavaScript
  • Drivers: ODBC, JDBC
  • PostgreSQL Wire Protocol

Data Sources

Import data from various sources:

  • Message queue topics on Confluent Kafka, Apache Kafka, and Azure Event Hub
  • SQL and NoSQL databases
  • SaaS data stores
  • CRM (e.g., Salesforce)
  • Marketing (e.g., Eloqua)
  • E-Commerce (e.g., Shopify)
  • Collaboration (e.g., Slack)
  • Files hosted on:
    • Kinetica Server (KiFS)
    • External storage: AWS S3, Azure Blob Storage, & HDFS

Import a variety of data formats, including:

  • CSV and other delimited text formats
  • Parquet
  • Shapefiles
  • Avro

Import data server-side:

  • Batch or streaming modes
  • SQL and native API support

Ingest data client-side:

  • Batch import from a text file on the client
  • Upload a local file to the server for later ingestion
  • SQL and native API support

Data Sinks

Export data to:

  • Message queue topics on Confluent Kafka, Apache Kafka, & Azure Event Hub
  • CRM (e.g., Salesforce)
  • Marketing (e.g., Eloqua)
  • E-Commerce (e.g., Shopify)
  • Collaboration (e.g., Slack)

Data Streams

Send streaming output to a data sink to enable real-time decisioning.


Native APIs

Kinetica supports a wide variety of native API languages, including:

  • C++
  • C#
  • Java
  • JavaScript / Node.js
  • Python


Drivers

Connect to a variety of tools and frameworks with Kinetica’s JDBC and ODBC interfaces.



KiSQL

KiSQL is a CLI client for Kinetica that enables users to:

  • Run queries remotely
  • Insert data into a Kinetica server
  • Upload files to and download files from a Kinetica server


Connectors

Kinetica provides connectors for a variety of processing frameworks and tools:
  • Spark
  • NiFi
  • Storm
  • FME
  • R
  • Beam
  • Mapbox


Workbench

Kinetica deploys a Workbench user interface that comes with:

  • A data object explorer to help you manage all data objects in your system
  • A data import wizard that helps you set up and initiate data loading procedures
  • A file system that allows you to drag and drop files into Kinetica
  • SQL workbooks that store and run SQL commands, and help you visualize data
  • Easy to find connection strings for all APIs and supported connections
  • Management capabilities to monitor and cancel running jobs
  • User administration
  • One-click upgrades and backup snapshots

Reveal

Kinetica Reveal is a lightweight business intelligence tool that helps you build dashboards with a wide array of data visualizations in Kinetica, secured with role-based access controls (RBAC).

Security

  • SSL encryption
  • Role-based authentication
  • Row-level and column-level security
  • Data masking and obfuscation

Resilience

  • Complete backups through Workbench, which persists snapshots to cold storage like AWS S3 or Azure Blob Storage
  • Automatic suspension of your cluster after a set period of inactivity, to save operational costs
  • Intracluster resilience through Kubernetes, which automatically restarts nodes upon crashes


Monitoring

Monitor the performance of Kinetica (metrics and logs) in AWS CloudWatch.

Version 7.1.9

Build Date: 2023.04.07

Highlights

  • SQL-GPT provides support for conversational queries in SQL & Workbench
  • Python UDF support for virtual environments
  • ESRI Geodatabase file support
  • JDBC ingress of streaming data
  • Dictionary encoding for BOOLEAN, INT8, INT16, IPv4, ULONG, & UUID data types
  • Support for ingress of compressed CSV files (gz, bz2)
  • New /get/records/json HTTP GET endpoint for simple record extraction
  • New graph solvers: Louvain, Jaccard, pickup-dropoff optimization
  • Workbench enhancements including multi-layer map blocks and browsing JDBC data sources
  • Enhanced Tableau extension for integrating Kinetica server-side visualization into Tableau
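Dictionary encoding stores each distinct value once and replaces occurrences with compact integer codes, which pays off for low-cardinality columns like the types listed above. A pure-Python sketch of the idea (not Kinetica's on-disk format):

```python
def dict_encode(values):
    """Replace repeated values with small integer codes plus a lookup
    dictionary -- the space-saving idea behind dictionary encoding."""
    codes, dictionary, index = [], [], {}
    for v in values:
        if v not in index:
            index[v] = len(dictionary)   # assign next code to new value
            dictionary.append(v)
        codes.append(index[v])
    return codes, dictionary

def dict_decode(codes, dictionary):
    """Recover the original column from codes plus the dictionary."""
    return [dictionary[c] for c in codes]

column = ["true", "false", "true", "true", "false"]
codes, dictionary = dict_encode(column)
```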

SQL

  • Added REGEXP_MATCH function for regular expression support
  • Added support for ~, ~*, !~ and !~* regular expression operators
  • Added support for JSON-formatted query parameters to /execute/sql
  • Added support for showing the dependencies of a view (in dependent order) to SHOW VIEW, using WITH OPTIONS (DEPENDENCIES=TRUE)
  • Added APPROX_MEDIAN & APPROX_PERCENTILE aggregate functions
  • Insertion of a record resulting in a primary key collision now results in an error instead of being silently discarded
  • Added KI_HINT_IGNORE_EXISTING_PK hint for inserting data to discard records with duplicate primary keys
  • Added support for altering multiple columns in a single ALTER TABLE statement
  • Added DOWNLOAD DIRECTORY command to download all files in a KiFS directory
  • Added LIST DIRECTORY command (alias for SHOW FILES)
  • Added support for setting the size and user usage limit of a KiFS directory
  • Added LOAD INTO option kafka_offset_reset_policy = 'latest' to start Kafka subscriptions at the latest offset rather than the beginning of the topic
  • Improved Kafka performance and settings
  • Modified partitioned tables to use more in-place data updates
  • Added support for allowing materialized views and SQL procedures to stop updating/running after a specified time or time period
  • Added support for modifying the refresh settings of materialized views and SQL procedures
  • Improved performance of queries containing monotonic functions
  • Added support for parallelizing data loads through JDBC data sources by splitting the remote query on a DATE or DATETIME column
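The new regular expression operators follow the PostgreSQL convention: ~ is a case-sensitive match, ~* a case-insensitive match, and !~ / !~* their negations. A pure-Python sketch of that truth table using the re module (assumed semantics, illustrative names):

```python
import re

def regex_op(op, text, pattern):
    """Sketch of PostgreSQL-style regex operators: ~ (match), ~* (match,
    case-insensitive), and !~ / !~* (their negations)."""
    flags = re.IGNORECASE if op.endswith("*") else 0
    matched = re.search(pattern, text, flags) is not None
    return not matched if op.startswith("!") else matched

checks = [
    regex_op("~", "Kinetica", "^Kin"),     # case-sensitive match
    regex_op("~*", "kinetica", "^KIN"),    # case-insensitive match
    regex_op("!~", "Kinetica", "^kin"),    # negated, case-sensitive non-match
]
```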

Ingress / Egress

  • Added support for ingest of gzip/bzip2-compressed delimited text files
  • Added type inferencing support for Parquet file ingest
  • Support for reading Avro and Parquet files with missing columns, if the columns are nullable or allow defaults (e.g., init_with_now or init_with_uuid)
  • Added support for the truncate_strings option for external file ingest
  • Added multi-head support for /insert/records/json endpoint
  • Added /get/records/json endpoint to directly retrieve data in JSON format
  • Added /get/file/{kifs-path} endpoint to directly retrieve KiFS files

Geospatial/Network Graph/Visualization

  • Added two new /match/graph solvers, with many new options & identifiers available:
    • match_similarity - computes the Jaccard similarity score between sets of input nodes
    • match_pickup_dropoff - ride-hailing-style solve that matches a single rider per vehicle
    • match_clusters - Louvain cluster solver; more features coming soon, including support for distributed graphs
  • Several updates to the match_supply_demand solver (MSDO), accessible from the /match/graph endpoint:
    • Added SAMPLE_DEMAND_CUSTOM_ORDER add-on identifier to seed the permutations from a user-provided order of demands
    • The solver can now optimize over two quantities, such as weight and volume, using the new add-on identifiers DEMAND_SIZE2 and SUPPLY_SIZE2
    • Fixed run-to-run consistency of solver results
  • The graph mock-up schema is now returned in the /create/graph response's info field as a DOT-formatted string, so that D3 libraries can be used to visualize how the graph is constructed; Workbench will visualize this content automatically in a future release
  • Geospatial graphs can now be constructed from any WKT type, including MULTILINESTRING and POLYGON types
  • New WMS parameters POINTCOLOR_ATTR and SHAPEFILLCOLOR_ATTR for feature (i.e. raster) rendering, specifying per point/shape colors via a long-valued (ARGB) column or expression
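For reference, the Jaccard score computed by match_similarity is the size of the intersection of two node sets divided by the size of their union; a pure-Python sketch:

```python
def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of two node sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Two shared nodes out of four distinct nodes overall -> score 0.5.
score = jaccard({"n1", "n2", "n3"}, {"n2", "n3", "n4"})
```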

APIs

  • C++
    • BOOLEAN type support
  • Java API
    • Bulk ingestion of JSON data
    • Bulk ingestion warnings now available separately from errors
    • Apache HTTP client upgrade (4.5.13 -> 5.2.1)
    • Bypassing server certificate checking will now override any use of a specified trust store
    • Fix for uploading large files
    • Improved bulk ingestion & SSL error handling/logging
  • JavaScript/Node.js API
    • Ingestion of JSON data
  • JDBC
    • BOOLEAN type support
    • Support for uploading a directory of files
    • Support for $ style query parameters
    • Improved support for query parameter type retention
    • Added support for JDBC hints to be specified as connection-string parameters
    • Added BypassSslCertCheck connection string option for turning SSL certificate checking on/off
  • ODBC
    • BOOLEAN type support
    • Added BypassSslCertCheck connection string option for turning SSL certificate checking on/off
  • Python API
    • Failure to connect will now raise an error during GPUdb object construction
    • Added to_df() function for converting result data to a DataFrame
    • Added support for:
      • Username/password in the connection URL
      • Protocol & port overrides
      • Numeric log levels

UI

  • Workbench
    • Improved example workbooks
    • KiFS file download
    • Table export to KiFS
    • Online example workbook refresh/update
    • Big Number workbook visualization
    • Map block supports visualizing multiple WMS layers
    • JDBC wizard for importing from another Kinetica instance
    • JDBC data source table data preview for import
    • JDBC/CDATA data source subscription support for continuous ingress
    • Wizard for importing Open Street Map (OSM) USA road network data into a graph
  • Kinetica Geospatial Analytics extension for Tableau
    • Cross filtering from Kinetica to Tableau
    • Class break rendering
    • Multiple layers
    • Calculated fields

Version 7.1.8

Build Date: 2022.10.16

Highlights

  • Data egress from Kinetica to CSV and Parquet
  • Boolean data type support
  • Regular expression matching via SQL
  • KIO deprecated; replaced by native ingress & egress capabilities

SQL

  • New functions:
    • REGEXP_LIKE - regular expression filtering
  • Geospatial indexes for WKT or latitude/longitude columns
  • Added support for multi-head fast record retrieval with KI_HINT_KEY_LOOKUP hint
  • SQL support for applying the k-means algorithm
  • LIST DATASOURCE command for listing remote tables via data source
  • Materialized views can be moved from one schema to another
  • Support for extracting the epoch from a timestamp using EXTRACT
  • Simplified syntax for loading data from a remote table via LOAD INTO...FROM REMOTE TABLE
  • Improved performance for queries using primary or partition key columns
  • Reduced-memory index for low-cardinality columns
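The k-means algorithm mentioned above alternates between assigning points to their nearest center and moving each center to the mean of its cluster. A minimal 1-D pure-Python sketch of those Lloyd iterations (illustrative; Kinetica runs this server-side via SQL):

```python
def kmeans_1d(points, centers, iters=10):
    """A few Lloyd iterations of k-means on 1-D data: assign each point
    to its nearest center, then move each center to its cluster's mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # keep a center in place if its cluster came up empty
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two clear clusters near 1.0 and 9.0; centers converge to their means.
centers = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.4, 8.6], centers=[0.0, 5.0])
```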

Ingress / Egress

  • Native data egress in CSV & Parquet formats
  • Support for limiting KiFS directory sizes
  • Support for altering the number of CPU & GPU data processors, as well as the maximum number of concurrent running GPU kernels, at runtime

Geospatial/Network Graph/Visualization

  • Graph support for all WKT types, including discontinuous ones
  • Support for running the supply/demand solve as a batch traveling salesman solve via BATCH_TSM_MODE option
  • Support for filtering out demand sites beyond a given distance from a transport's starting location when using the supply/demand solver, via TRUCK_SERVICE_RADIUS
  • Support for a crowd-sourced style of supply/demand solve, in which DEPOT_ID serves as a grouping mechanism rather than being assumed to be the location of the transports
  • Added optional supply/demand unit unloading penalties via DEMAND_PENALTY & SUPPLY_TRUCK_PENALTY
  • Added optional threshold on supply side sequencing via MAX_SUPPLY_COMBINATIONS
  • Sequencing on the supply side of supply/demand solvers is now multithreaded
  • Support for querying feature information at a given point via the WMS GetFeatureInfo function

APIs

  • Java API
    • Boolean type support
    • Added retry handler for requests that fail under heavy load
    • Improved bulk ingestion error handling/reporting
    • Improved support for compiling & running API client code under different Java versions
  • JDBC
    • BLOB/CLOB support
    • Added support for multi-head fast record retrieval with KI_HINT_KEY_LOOKUP hint
    • Added support for disabling multi-head inserts
    • Improved INSERT error handling/reporting
    • Added support for JDBC clients that require transactions to use the non-transactional driver
  • Python API
    • Improved HTTP protocol compliance

UI

  • Workbench
    • Added CData import UI
    • Added usage metrics UI
    • Updated example workbooks
    • Enhanced home screen

Version 7.1.7

Build Date: 2022.07.03

Highlights

  • Kinetica Workbench web interface for developing and collaborating with interactive SQL workbooks
  • JDBC ingress and egress
  • Import and export data to/from other data sources and enterprise applications
  • Reveal filter slice allows additional map layers to be used for filtering the base layer
  • New track SQL functions

SQL

  • New functions
  • Added support for ILIKE operator
  • New SQL command to upload a file to KiFS from a URL
  • Support for renaming schemas
  • Support for renaming and moving materialized views
  • Added SQL support for generating isochrones
  • SQL procedures can now invoke UDFs
  • Added support for pg_roles and pg_type catalog tables
  • Statistics and errors now saved for data imports and exports
  • Track rendering now supported on joins and materialized views
  • Track rendering now supports alternate ID and timestamp column names
  • Logical views, logical external tables, and catalog tables are now accessible from the native API
  • Materialized views are now updated, when needed, after multi-head inserts
  • PostgreSQL Wire Protocol support now includes Extended Query Protocol
  • KiSQL now supports line editing and line history
  • Improved performance of ingestion and copying of data with init_with_uuid column property
  • Improved UUID generation algorithm to produce more unique values for large batch inserts
  • Added support for chunk-skipping in series partitions, when query includes the partition key
  • Input UUIDs do not require hyphens
  • Support for optional AM/PM in default date format
  • Added the LLL (whole milliseconds) and $ (start of optional section) codes to the TO_CHAR family of functions
  • Updated CAST function to work more consistently with other databases
  • K-means clustering algorithm now supported on CPUs, join views, and materialized views
  • Support for multiple subscriptions to the same Kafka topic, if made by different users

Ingress / Egress

  • Google Cloud Storage support
    • Direct ingestion from files in GCS
    • External tables from files in GCS
    • GCS-based cold storage tier
  • JDBC ingress support
    • Direct ingestion from queries on JDBC-accessible databases
    • External tables from queries on JDBC-accessible databases
  • JDBC Egress support
    • Export Kinetica query results into JDBC-accessible databases
  • Broader support for SQL & NoSQL databases and enterprise applications via Kinetica data sources
  • Support for KiFS files in configuration settings and credential, data source, & data sink properties that reference files
  • Support customer-managed keys for AWS S3 access
  • Allow S3 data source or cold storage users to specify the server-side encryption method and key
  • Enable S3 IAM role parameter in configuration settings for cold storage

Geospatial/Network Graph/Visualization

  • New /match/graph solver, match_charging_stations, for finding an optimal route involving multiple electric-vehicle charging stations
  • Distributed graph support for /query/graph

APIs

  • Java API
    • Added capability to pass in self-signed certificates & passwords as options
    • Updated dependencies to more secure versions
    • Fixed unhandled exception when an Avro encoding error occurs
    • Fixed error where a type's column order would not match a table created from it
    • Removed client-side primary key check, to improve performance and make returned errors more consistently delivered
  • Python API
    • Made the API more Python3 compatible
    • Prevented client hanging when connection IP/URL does not match any known to the server; client will operate in degraded mode (no multi-head, etc.)
    • Removed client-side primary key check, to improve performance and make returned errors more consistently delivered
    • Fixed a formatting issue when building expressions for keyed lookups that caused failures on Python 2.7.x
    • Corrected some string/null comparisons
  • C++/C#
    • Removed client-side primary key check, to improve performance and make returned errors more consistently delivered

UI

  • Kinetica Workbench web interface for developing and collaborating with interactive SQL workbooks
  • GAdmin viewer for SVG animation results in Graph Match UI
  • Allow additional Reveal map layers to be used for filtering the base layer
  • Allow clicking and highlighting of map tracks in Reveal
  • Allow Reveal to be embedded in an iframe