While any version of Kinetica can be upgraded to the latest version using KAgent, there are several aspects of upgrading that should be considered before doing so, as well as a number of steps that should be taken after installing Kinetica 7.1.
The native APIs are not backward compatible with prior releases; any clients that use these interfaces must be updated after installation. See Native APIs for instructions on upgrading client APIs. Client code that uses those interfaces will also need to be updated, along with any supporting configuration (Java pom.xml files, Python packages, etc.).
Database connectors (Spark, etc.) make use of the updated native API and ODBC/JDBC drivers when communicating with the database and should also be updated to the latest versions after installation. See Data Connectors for the list of connectors and their respective configuration guides.
The ODBC/JDBC interfaces are not backward compatible with prior releases; any clients that make use of the ODBC or JDBC drivers need to be updated after installation. See ODBC/JDBC for instructions on upgrading drivers.
Kinetica 7.1 deprecates collections in favor of more traditional schemas, which provide namespacing for tables & views in the database. While this change allows for two or more schemas to contain a table or view with the same name, it does have several implications for users upgrading from prior versions.
A table or view can be addressed by qualified name, prefixing the name with the name of its containing schema, separated by a dot; e.g.:
<schema name>.<table/view name>
A table or view referenced without a schema will be looked for in the user's default schema, if assigned; effectively, a reference like this:

<table name>

is resolved like this:

<default schema name>.<table name>
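For example, assuming a user whose default schema is schema_a and a table schema_a.table_a (names illustrative):

```sql
-- Qualified reference: resolves regardless of the user's default schema
SELECT * FROM schema_a.table_a;

-- Unqualified reference: resolved against the user's default schema,
-- making it equivalent to the qualified query above
SELECT * FROM table_a;
```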
This scheme can be overridden by specifying a schema to use in the request:
Using the native API, a current_schema option can be passed to the /execute/sql endpoint
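As a sketch, an /execute/sql request carrying this option might look like the following; the host, statement, and schema name are illustrative:

```
POST http://<kinetica host>:9191/execute/sql
{
    "statement": "SELECT * FROM table_a",
    "options": { "current_schema": "schema_a" }
}
```

With current_schema set, the unqualified table_a resolves to schema_a.table_a for this request only.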
In an ODBC/JDBC SQL session, setting the current schema will cause all subsequent calls in the session to use that schema for unqualified tables or views:
SET CURRENT SCHEMA [<schema name>]
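For instance (schema and table names illustrative):

```sql
-- All subsequent unqualified references resolve against schema_a
SET CURRENT SCHEMA schema_a;
SELECT * FROM table_a;   -- resolves to schema_a.table_a

-- Omitting the schema name reverts to the user's default schema
SET CURRENT SCHEMA;
```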
Top-Level Tables and the Home & Default Schemas
Within Kinetica 7.1, all tables & views must be contained within a schema. All tables & views created in a previous release that were not contained within a schema will automatically be moved into the new home schema.
Permissions on the moved database objects will be retained, and this home schema will be assigned as the default schema to all existing users. This will ensure that any existing unqualified references to top-level tables or views will still resolve correctly in Kinetica 7.1.
In addition, new users not assigned a default schema explicitly will be assigned this home schema as their default schema. This automatic assignment includes LDAP users created automatically by the system. In this way, users created both before and after an upgrade will have the same default name resolution for pre-upgrade tables & views.
Any user can be assigned a different default schema afterwards, via /alter/user.
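As a sketch, an /alter/user request assigning a new default schema might look like the following, assuming a set_default_schema action; the host, user, and schema names are illustrative:

```
POST http://<kinetica host>:9191/alter/user
{
    "name": "auser",
    "action": "set_default_schema",
    "value": "schema_a",
    "options": {}
}
```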
Unqualified Within-Schema References
While the use of a default schema can help with references to top-level tables made in Kinetica 7.0, and the setting of a current schema may help with references to individual within-schema tables, some queries may need to be modified to make qualified references to those tables in Kinetica 7.1.
A simple case of this occurs with a join that makes unqualified references to two tables residing in different schemas. Setting a current schema cannot provide unqualified access to both tables, so the references will need to have the schemas made explicit. For instance, consider a join between two tables in different schemas, table_a in schema_a and table_b in schema_b, where the references were previously unqualified (join column illustrative):

SELECT *
FROM table_a
JOIN table_b ON table_a.id = table_b.id

This will now need qualified schema references:

SELECT *
FROM schema_a.table_a
JOIN schema_b.table_b ON table_a.id = table_b.id
Any queries of this type will need to make qualified references to their tables, and any views containing queries of this type will need to be modified in the same way and recreated.
A similar case occurs within SQL procedures, which may reference tables/views in multiple schemas. These will also need to be modified to make qualified references to those tables/views and then recreated.
Schemas do not support having a TTL assigned to them. TTLs can be assigned directly to the tables & views within a schema instead.
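For example, expiration can be managed on a table directly (names and TTL value illustrative; the TTL is given in minutes, and -1 conventionally disables expiration):

```sql
-- Set a 120-minute TTL on the table
ALTER TABLE schema_a.table_a SET TTL 120;

-- Prevent the table from expiring at all
ALTER TABLE schema_a.table_a SET TTL -1;
```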
The set of built-in schemas has changed as shown in the following table.
| 7.1 Name | Pre-7.1 Name |
Previous releases of Kinetica supported the application of a protected status to a schema/collection, table, or view, which would prevent the entity (or contained entities, in the case of a collection) from expiring by TTL timeout.
In Kinetica 7.1, this feature is no longer supported. The expiration of tables & views should be managed via direct manipulation of TTL instead.
Kinetica 7.1 continues the move to SQL-92 compliance with closer adherence to the standard than the previous version. This, along with other changes to the core, has narrowed support for some SQL operations, listed below.
As part of the new support for schema namespacing, several changes have been made in the supporting SQL.
The name resolution rules for tables and views not prefixed with a schema name have changed. See above for details.
The home schema will be assigned as the default schema to existing users during the Kinetica 7.1 upgrade process. Users created after upgrade will be assigned the home schema as their default schema, as well. Any user can be assigned a different default schema afterwards, via ALTER USER.
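In SQL, this assignment might look like the following, assuming a DEFAULT_SCHEMA clause on ALTER USER; the user and schema names are illustrative:

```sql
ALTER USER auser SET DEFAULT_SCHEMA schema_a;
```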
Within the code of distributed UDFs, input & output tables, passed in via /execute/proc, are made available as a map of table name to table object. The table names used as keys into this map are fully-qualified table names in Kinetica 7.1, whether or not they were fully-qualified when specified. Make sure each table name used as a key to look up a table object in either map contains the corresponding schema name prefix, irrespective of the default schema of the user who created or executed the UDF.
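The key-lookup behavior can be sketched as follows; this is a minimal stand-in using a plain dict in place of the UDF API's table map, and the schema/table names are illustrative:

```python
# In Kinetica 7.1, the input & output table maps handed to a distributed UDF
# are keyed by fully qualified table names, even when bare names were passed
# to /execute/proc.

def qualify(schema, table):
    """Build the fully qualified name used as a key into a table map."""
    return "{}.{}".format(schema, table)

# Simulated input-table map as a UDF would see it in 7.1
input_data = {qualify("schema_a", "table_a"): "table_a_object"}

# Correct: look the table up by its qualified name
table = input_data[qualify("schema_a", "table_a")]

# Incorrect: a bare table name is no longer a valid key
found_bare = "table_a" in input_data   # False in 7.1
```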