Aggregate Group By

Calculates unique combinations (groups) of values for the given columns in a given table/view/collection and computes aggregates on each unique combination. This is somewhat analogous to an SQL-style SELECT...GROUP BY.

For aggregation details and examples, see Aggregation. For limitations, see Aggregation Limitations.

Any column(s) can be grouped on, and all column types except unrestricted-length strings may be used for computing applicable aggregates; columns marked as store-only cannot be used in grouping or aggregation.

The results can be paged via the input parameter offset and input parameter limit. For example, to get 10 groups with the largest counts, the inputs would be: limit=10, options={"sort_order":"descending", "sort_by":"value"}.
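
As an illustration, here is a minimal sketch of that paging example, assuming a Python client (gpudb) whose aggregate_group_by wrapper mirrors the input parameters documented below; the connection details, table name, and column name are hypothetical:

    import gpudb

    # Hypothetical connection details
    db = gpudb.GPUdb(host="127.0.0.1", port="9191")

    # Return the 10 groups of 'x' with the largest counts
    response = db.aggregate_group_by(
        table_name="my_table",                 # hypothetical existing table
        column_names=["x", "count(*)"],
        offset=0,
        limit=10,
        encoding="json",
        options={"sort_order": "descending", "sort_by": "value"}
    )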

Input parameter options can be used to customize the behavior of this call, e.g., filtering or sorting the results.

To group by columns 'x' and 'y' and compute the number of objects within each group, use: column_names=['x','y','count(*)'].

To also compute the sum of 'z' over each group, use: column_names=['x','y','count(*)','sum(z)'].
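
A sketch combining both of the examples above, reusing the assumed db handle from the previous sketch; the table and column names are placeholders:

    # Group by 'x' and 'y', count the rows in each group, and sum 'z' per group
    response = db.aggregate_group_by(
        table_name="my_table",
        column_names=["x", "y", "count(*)", "sum(z)"],
        offset=0,
        limit=1000,
        encoding="json",
        options={}
    )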

Available aggregation functions are: count(*), sum, min, max, avg, mean, stddev, stddev_pop, stddev_samp, var, var_pop, var_samp, arg_min, arg_max, and count_distinct.

Available grouping functions are Rollup, Cube, and Grouping Sets.

This service also provides support for Pivot operations.

Filtering on aggregates is supported via expressions using aggregation functions supplied to the having option.
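
For example, to keep only groups containing more than 100 records, an aggregate expression could be supplied to the having option (documented below); this is a sketch with an illustrative threshold and placeholder names:

    # Aggregate, then filter the aggregated results with a 'having' expression
    response = db.aggregate_group_by(
        table_name="my_table",
        column_names=["x", "count(*)"],
        offset=0,
        limit=1000,
        encoding="json",
        options={"having": "count(*) > 100"}
    )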

The response is returned as a dynamic schema. For details, see the dynamic schemas documentation.

If a result_table name is specified in the input parameter options, the results are stored in a new table with that name, and no results are returned in the response. Both the table name and resulting column names must adhere to standard naming conventions; column/aggregation expressions will need to be aliased. If the source table's shard key is used as the grouping column(s) and all result records are selected (input parameter offset is 0 and input parameter limit is -9999), the result table will be sharded; in all other cases it will be replicated. Sorting will properly function only if the result table is replicated or if there is only one processing node; it should not be relied upon in other cases. This option is not available when any of the values of input parameter column_names is an unrestricted-length string.
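
A sketch of storing the results in a new table rather than returning them, with the column/aggregation expressions aliased as required; the result table name and aliases are placeholders:

    # Persist the grouped results into a new table; no records are returned
    response = db.aggregate_group_by(
        table_name="my_table",
        column_names=["x", "y", "count(*) as cnt", "sum(z) as total_z"],
        offset=0,
        limit=-9999,   # END_OF_SET: select all result records
        encoding="binary",
        options={
            "result_table": "my_table_grouped",
            "result_table_persist": "true"
        }
    )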

Input Parameter Description

Name Type Description
table_name string Name of the table on which the operation will be performed. Must be an existing table/view/collection.
column_names array of strings List of one or more column names, expressions, and aggregate expressions.
offset long A non-negative integer indicating the number of initial results to skip (this can be useful for paging through the results). The minimum allowed value is 0. The maximum allowed value is MAX_INT.
limit long A positive integer indicating the maximum number of results to be returned, or END_OF_SET (-9999) to indicate that the maximum number of results should be returned. The default value is 1000.
encoding string

Specifies the encoding for returned records.

Supported Values Description
binary Indicates that the returned records should be binary encoded.
json Indicates that the returned records should be json encoded.
options map of string to strings

Optional parameters. The default value is an empty map ( {} ).

Supported Parameters (keys) Parameter Description
collection_name Name of a collection which is to contain the table specified in result_table. If the specified collection does not exist, it will be created automatically. If empty, then the table will be a top-level table. Additionally, this option is invalid if input parameter table_name is a collection.
expression Filter expression to apply to the table prior to computing the aggregate group by.
having Filter expression to apply to the aggregated results.
sort_order

String indicating how the returned values should be sorted - ascending or descending.

Supported Values Description
ascending Indicates that the returned values should be sorted in ascending order.
descending Indicates that the returned values should be sorted in descending order.
sort_by

String determining how the results are sorted.

Supported Values Description
key Indicates that the returned values should be sorted by key, which corresponds to the grouping columns. If there are multiple grouping columns (and sorting is by key), the results will be sorted by the first grouping column, then by the second grouping column, etc.
value Indicates that the returned values should be sorted by value, which corresponds to the aggregates. If there are multiple aggregates (and sorting is by value), the results will be sorted by the first aggregate, then by the second aggregate, etc.
result_table The name of the table used to store the results. Has the same naming restrictions as tables. Column names (group-by and aggregate fields) need to be given aliases, e.g., ["FChar256 as fchar256", "sum(FDouble) as sfd"]. If present, no results are returned in the response. This option is not available if one of the grouping attributes is an unrestricted-length string (i.e., not charN) type.
result_table_persist

If true, then the result table specified in result_table will be persisted and will not expire unless a ttl is specified. If false, then the result table will be an in-memory table and will expire unless a ttl is specified. The supported values are:

  • true
  • false
result_table_force_replicated

Force the result table to be replicated (ignores any sharding). Must be used in combination with the result_table option. The supported values are:

  • true
  • false
result_table_generate_pk

If true, then a primary key will be set for the result table. Must be used in combination with the result_table option. The supported values are:

  • true
  • false
ttl Sets the TTL of the table specified in result_table.
chunk_size Indicates the chunk size to be used for the result table. Must be used in combination with the result_table option.
create_indexes Comma-separated list of columns on which to create indexes on the result table. Must be used in combination with the result_table option.
view_id ID of the view this result table is part of. The default value is ''.
materialize_on_gpu

If true, then the columns of the group-by result table will be cached on the GPU. Must be used in combination with the result_table option. The supported values are:

  • true
  • false
pivot The name of the column to use as the pivot column.
pivot_values The value list provided will become the column headers in the output. Should be values from the pivot column (see the sketch following this table).
grouping_sets Customize the grouping attribute sets used to compute the aggregates. These sets can include ROLLUP or CUBE operators. The attribute sets should be enclosed in parentheses and can include composite attributes. All attributes specified in the grouping sets must be present in the group-by attributes.
rollup This option is used to specify the multilevel aggregates.
cube This option is used to specify the multidimensional aggregates.
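
A sketch of a pivot request using the pivot and pivot_values options described above; the table, column names, and pivot values are placeholders, and the exact value formats and output column naming are assumptions to be confirmed against the Pivot documentation:

    # Pivot the 'quarter' values into separate output columns of summed sales
    response = db.aggregate_group_by(
        table_name="my_table",
        column_names=["product", "sum(sales) as total_sales"],
        offset=0,
        limit=1000,
        encoding="json",
        options={
            "pivot": "quarter",
            "pivot_values": "Q1,Q2,Q3,Q4"   # assumed comma-separated list of pivot column values
        }
    )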

Output Parameter Description

Name Type Description
response_schema_str string Avro schema of output parameter binary_encoded_response or output parameter json_encoded_response.
binary_encoded_response bytes Avro binary encoded response.
json_encoded_response string Avro JSON encoded response.
total_number_of_records long Total/Filtered number of records.
has_more_records boolean Indicates whether there were too many records to return and only a partial set was returned.