Aggregations

Aggregations can be provided at ingestion time as part of the ingestion spec as a way of summarizing data before it enters Druid.
Aggregations can also be specified as part of many queries at query time.

Available aggregations are:

Count aggregator

count computes the count of Druid rows that match the filters.

{"type":"count","name":<output_name>}

Please note the count aggregator counts the number of Druid rows, which does not always reflect the number of raw events ingested.
This is because Druid can be configured to roll up data at ingestion time. To
count the number of ingested rows of data, include a count aggregator at ingestion time, and a longSum aggregator at
query time.
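As a sketch of this pattern, suppose the ingestion spec's metricsSpec includes {"type":"count","name":"count"}. A query-time longSum over that metric then returns the number of raw ingested rows. The datasource, interval, and output names below are hypothetical:

```json
{
  "queryType": "timeseries",
  "dataSource": "sample_datasource",
  "granularity": "day",
  "intervals": ["2024-01-01/2024-01-08"],
  "aggregations": [
    { "type": "longSum", "name": "numIngestedRows", "fieldName": "count" }
  ]
}
```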

Sum aggregators

longSum aggregator

Computes the sum of values as a 64-bit signed integer.

{"type":"longSum","name":<output_name>,"fieldName":<metric_name>}

name – output name for the summed value
fieldName – name of the metric column to sum over

doubleSum aggregator

Computes and stores the sum of values as a 64-bit floating-point value. Similar to longSum.

{"type":"doubleSum","name":<output_name>,"fieldName":<metric_name>}

floatSum aggregator

Computes and stores the sum of values as a 32-bit floating-point value. Similar to longSum and doubleSum.

{"type":"floatSum","name":<output_name>,"fieldName":<metric_name>}

Min / Max aggregators

doubleMin aggregator

doubleMin computes the minimum of all metric values and Double.POSITIVE_INFINITY

{"type":"doubleMin","name":<output_name>,"fieldName":<metric_name>}

doubleMax aggregator

doubleMax computes the maximum of all metric values and Double.NEGATIVE_INFINITY

{"type":"doubleMax","name":<output_name>,"fieldName":<metric_name>}

floatMin aggregator

floatMin computes the minimum of all metric values and Float.POSITIVE_INFINITY

{"type":"floatMin","name":<output_name>,"fieldName":<metric_name>}

floatMax aggregator

floatMax computes the maximum of all metric values and Float.NEGATIVE_INFINITY

{"type":"floatMax","name":<output_name>,"fieldName":<metric_name>}

longMin aggregator

longMin computes the minimum of all metric values and Long.MAX_VALUE

{"type":"longMin","name":<output_name>,"fieldName":<metric_name>}

longMax aggregator

longMax computes the maximum of all metric values and Long.MIN_VALUE

{"type":"longMax","name":<output_name>,"fieldName":<metric_name>}
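As an illustration of the min/max aggregators (datasource and column names here are hypothetical), a timeseries query could compute a daily value range by pairing the two:

```json
{
  "queryType": "timeseries",
  "dataSource": "weather",
  "granularity": "day",
  "intervals": ["2024-06-01/2024-06-08"],
  "aggregations": [
    { "type": "doubleMin", "name": "minTemp", "fieldName": "temperature" },
    { "type": "doubleMax", "name": "maxTemp", "fieldName": "temperature" }
  ]
}
```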

First / Last aggregators

The (double/float/long) first and last aggregators cannot be used in an ingestion spec; they should only be specified as part of queries.

Note that queries with first/last aggregators on a segment created with rollup enabled will return the rolled-up value, and not the last value within the raw ingested data.
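Since these aggregators are query-time only, they appear in the query's aggregations list. A sketch, with hypothetical datasource and metric names:

```json
{
  "queryType": "timeseries",
  "dataSource": "metrics",
  "granularity": "all",
  "intervals": ["2024-01-01/2024-02-01"],
  "aggregations": [
    { "type": "longFirst", "name": "firstValue", "fieldName": "value" },
    { "type": "longLast", "name": "lastValue", "fieldName": "value" }
  ]
}
```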

doubleFirst aggregator

doubleFirst computes the metric value with the minimum timestamp, or 0 if no rows exist

{"type":"doubleFirst","name":<output_name>,"fieldName":<metric_name>}

doubleLast aggregator

doubleLast computes the metric value with the maximum timestamp, or 0 if no rows exist

{"type":"doubleLast","name":<output_name>,"fieldName":<metric_name>}

floatFirst aggregator

floatFirst computes the metric value with the minimum timestamp, or 0 if no rows exist

{"type":"floatFirst","name":<output_name>,"fieldName":<metric_name>}

floatLast aggregator

floatLast computes the metric value with the maximum timestamp, or 0 if no rows exist

{"type":"floatLast","name":<output_name>,"fieldName":<metric_name>}

longFirst aggregator

longFirst computes the metric value with the minimum timestamp, or 0 if no rows exist

{"type":"longFirst","name":<output_name>,"fieldName":<metric_name>}

longLast aggregator

longLast computes the metric value with the maximum timestamp, or 0 if no rows exist

{"type":"longLast","name":<output_name>,"fieldName":<metric_name>}

stringFirst aggregator

stringFirst computes the metric value with the minimum timestamp, or null if no rows exist

{"type":"stringFirst","name":<output_name>,"fieldName":<metric_name>}


Approximate Aggregations

Cardinality aggregator

Computes the cardinality of a set of Druid dimensions, using HyperLogLog to estimate the cardinality. Please note that this
aggregator will be much slower than indexing a column with the hyperUnique aggregator. This aggregator also runs over a dimension column, which
means the string dimension cannot be removed from the dataset to improve rollup. In general, we strongly recommend using the hyperUnique aggregator
instead of the cardinality aggregator if you do not care about the individual values of a dimension.

Each individual element of the "fields" list can be a String or DimensionSpec. A String dimension in the fields list is equivalent to a DefaultDimensionSpec (no transformations).

The HyperLogLog algorithm generates decimal estimates with some error. "round" can be set to true to round off estimated
values to whole numbers. Note that even with rounding, the cardinality is still an estimate. The "round" field only
affects query-time behavior, and is ignored at ingestion time.
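Following the angle-bracket convention of the examples above, the cardinality aggregator takes a list of fields plus the optional byRow and round flags:

```json
{
  "type": "cardinality",
  "name": <output_name>,
  "fields": [ <dimension1>, <dimension2>, ... ],
  "byRow": <false | true>,
  "round": <false | true>
}
```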

Cardinality by value

When byRow is set to false (the default), it computes the cardinality of the set composed of the union of all dimension values for all the given dimensions.
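As a sketch of a DimensionSpec element in the fields list (the dimension and output names here are hypothetical; the substring extraction function is assumed available), this counts distinct first letters of a last_name dimension by value:

```json
{
  "type": "cardinality",
  "name": "distinct_first_letters",
  "fields": [
    {
      "type": "extraction",
      "dimension": "last_name",
      "outputName": "first_letter",
      "extractionFn": { "type": "substring", "index": 0, "length": 1 }
    }
  ],
  "byRow": false
}
```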

"isInputHyperUnique" can be set to true to index pre-computed HLL (Base64-encoded output from druid-hll is expected).
The "isInputHyperUnique" field only affects ingestion-time behavior, and is ignored at query time.
