Terms aggregation
A multi-bucket value source based aggregation where buckets are dynamically built - one per unique value.
Example:
GET /_search
{
  "aggs": {
    "genres": {
      "terms": { "field": "genre" }
    }
  }
}
Response:
{
  ...
  "aggregations": {
    "genres": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        { "key": "electronic", "doc_count": 6 },
        { "key": "rock", "doc_count": 3 },
        { "key": "jazz", "doc_count": 2 }
      ]
    }
  }
}
The response fields are:
- doc_count_error_upper_bound: an upper bound of the error on the document counts for each term, see below.
- sum_other_doc_count: when there are lots of unique terms, Elasticsearch only returns the top terms; this number is the sum of the document counts for all buckets that are not part of the response.
- buckets: the list of the top buckets, the meaning of top being defined by the order.
By default, the terms aggregation will return the buckets for the top ten terms ordered by the doc_count. One can change this default behaviour by setting the size parameter.
The field can be Keyword, Numeric, ip, boolean, or binary.
Size
The size parameter can be set to define how many term buckets should be returned out of the overall terms list. By default, the node coordinating the search process will request each shard to provide its own top size term buckets and once all shards respond, it will reduce the results to the final list that will then be returned to the client. This means that if the number of unique terms is greater than size, the returned list is slightly off and not accurate (it could be that the term counts are slightly off and it could even be that a term that should have been in the top size buckets was not returned).
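For instance, a minimal sketch (reusing the genre field from the example above) that asks for the top 20 terms instead of the default 10:
# Illustrative only: same genre field as the earlier example, with a larger size
GET /_search
{
  "aggs": {
    "genres": {
      "terms": { "field": "genre", "size": 20 }
    }
  }
}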
If you want to retrieve all terms or all combinations of terms in a nested terms aggregation you should use the Composite aggregation, which allows you to paginate over all possible terms rather than setting a size greater than the cardinality of the field in the terms aggregation. The terms aggregation is meant to return the top terms and does not allow pagination.
Shard Size
The higher the requested size is, the more accurate the results will be, but also the more expensive it will be to compute the final results (both due to bigger priority queues that are managed on a shard level and due to bigger data transfers between the nodes and the client).
The shard_size parameter can be used to minimize the extra work that comes with a bigger requested size. When defined, it will determine how many terms the coordinating node will request from each shard. Once all the shards have responded, the coordinating node will then reduce them to a final result which will be based on the size parameter - this way, one can increase the accuracy of the returned terms and avoid the overhead of streaming a big list of buckets back to the client.
shard_size cannot be smaller than size (as it doesn't make much sense). When it is, Elasticsearch will override it and reset it to be equal to size.
The default shard_size is (size * 1.5 + 10).
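As a sketch of how the two parameters combine (again on the genre field used above), the following still returns ten buckets but considers 100 candidate terms per shard, trading extra shard-level work for better accuracy:
# Illustrative only: shard_size > size improves accuracy at extra cost
GET /_search
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "size": 10,
        "shard_size": 100
      }
    }
  }
}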
Document count error
doc_count values for a terms aggregation may be approximate. As a result, any sub-aggregations on the terms aggregation may also be approximate.
To calculate doc_count values, each shard provides its own top terms and document counts. The aggregation combines these shard-level results to calculate its final doc_count values. To measure the accuracy of doc_count values, the aggregation results include the following properties:
- sum_other_doc_count: (integer) The total document count for any terms not included in the results.
- doc_count_error_upper_bound: (integer) The highest possible document count for any single term not included in the results. If 0, doc_count values are accurate.
Per bucket document count error
To get the doc_count_error_upper_bound for each term, set show_term_doc_count_error to true:
GET /_search
{
  "aggs": {
    "products": {
      "terms": {
        "field": "product",
        "size": 5,
        "show_term_doc_count_error": true
      }
    }
  }
}
This shows an error value for each term returned by the aggregation which represents the worst case error in the document count
and can be useful when deciding on a value for the shard_size
parameter. This is calculated by summing the document counts for
the last term returned by all shards which did not return the term.
These errors can only be calculated in this way when the terms are ordered by descending document count. When the aggregation is ordered by the terms values themselves (either ascending or descending) there is no error in the document count since if a shard does not return a particular term which appears in the results from another shard, it must not have that term in its index. When the aggregation is either sorted by a sub aggregation or in order of ascending document count, the error in the document counts cannot be determined and is given a value of -1 to indicate this.
Order
The order of the buckets can be customized by setting the order parameter. By default, the buckets are ordered by their doc_count descending. It is possible to change this behaviour as documented below:
Sorting by ascending _count or by a sub-aggregation is discouraged as it increases the error on document counts. It is fine when a single shard is queried, or when the field that is being aggregated was used as a routing key at index time: in these cases results will be accurate since shards have disjoint values. Otherwise, however, errors are unbounded. One particular case that could still be useful is sorting by a min or max aggregation: counts will not be accurate but at least the top buckets will be correctly picked.
Ordering the buckets by their doc_count in an ascending manner:
GET /_search
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "order": { "_count": "asc" }
      }
    }
  }
}
Ordering the buckets alphabetically by their terms in an ascending manner:
GET /_search
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "order": { "_key": "asc" }
      }
    }
  }
}
Deprecated in 6.0.0: Use _key instead of _term to order buckets by their term.
Ordering the buckets by a single-value metrics sub-aggregation (identified by the aggregation name):
GET /_search
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "order": { "max_play_count": "desc" }
      },
      "aggs": {
        "max_play_count": { "max": { "field": "play_count" } }
      }
    }
  }
}
Ordering the buckets by a multi-value metrics sub-aggregation (identified by the aggregation name):
GET /_search
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "order": { "playback_stats.max": "desc" }
      },
      "aggs": {
        "playback_stats": { "stats": { "field": "play_count" } }
      }
    }
  }
}
Pipeline aggs cannot be used for sorting
Pipeline aggregations are run during the reduce phase after all other aggregations have already completed. For this reason, they cannot be used for ordering.
It is also possible to order the buckets based on a "deeper" aggregation in the hierarchy. This is supported as long as the aggregations in the path are of a single-bucket type, where the last aggregation in the path may either be a single-bucket one or a metrics one. If it's a single-bucket type, the order will be defined by the number of docs in the bucket (i.e. doc_count); in case it's a metrics one, the same rules as above apply (where the path must indicate the metric name to sort by in case of a multi-value metrics aggregation, and in case of a single-value metrics aggregation the sort will be applied on that value).
The path must be defined in the following form:
AGG_SEPARATOR       =  '>' ;
METRIC_SEPARATOR    =  '.' ;
AGG_NAME            =  <the name of the aggregation> ;
METRIC              =  <the name of the metric (in case of multi-value metrics aggregation)> ;
PATH                =  <AGG_NAME> [ <AGG_SEPARATOR>, <AGG_NAME> ]* [ <METRIC_SEPARATOR>, <METRIC> ] ;
GET /_search
{
  "aggs": {
    "countries": {
      "terms": {
        "field": "artist.country",
        "order": { "rock>playback_stats.avg": "desc" }
      },
      "aggs": {
        "rock": {
          "filter": { "term": { "genre": "rock" } },
          "aggs": {
            "playback_stats": { "stats": { "field": "play_count" } }
          }
        }
      }
    }
  }
}
The above will sort the artist’s countries buckets based on the average play count among the rock songs.
Multiple criteria can be used to order the buckets by providing an array of order criteria such as the following:
GET /_search
{
  "aggs": {
    "countries": {
      "terms": {
        "field": "artist.country",
        "order": [
          { "rock>playback_stats.avg": "desc" },
          { "_count": "desc" }
        ]
      },
      "aggs": {
        "rock": {
          "filter": { "term": { "genre": "rock" } },
          "aggs": {
            "playback_stats": { "stats": { "field": "play_count" } }
          }
        }
      }
    }
  }
}
The above will sort the artist’s countries buckets based on the average play count among the rock songs and then by
their doc_count
in descending order.
In the event that two buckets share the same values for all order criteria the bucket’s term value is used as a tie-breaker in ascending alphabetical order to prevent non-deterministic ordering of buckets.
Minimum document count
It is possible to only return terms that match more than a configured number of hits using the min_doc_count option:
GET /_search
{
  "aggs": {
    "tags": {
      "terms": {
        "field": "tags",
        "min_doc_count": 10
      }
    }
  }
}
The above aggregation would only return tags which have been found in 10 hits or more. Default value is 1.
Terms are collected and ordered on a shard level and merged with the terms collected from other shards in a second step. However, the shard does not have the information about the global document count available. The decision whether a term is added to a candidate list depends only on the order computed on the shard using local shard frequencies. The min_doc_count criterion is only applied after merging local terms statistics of all shards. In a way the decision to add the term as a candidate is made without being very certain whether the term will actually reach the required min_doc_count. This might cause many (globally) high-frequency terms to be missing in the final result if low-frequency terms populated the candidate lists. To avoid this, the shard_size parameter can be increased to allow more candidate terms on the shards. However, this increases memory consumption and network traffic.
shard_min_doc_count
The parameter shard_min_doc_count regulates how certain a shard is that a term should actually be added to the candidate list with respect to the min_doc_count. Terms will only be considered if their local shard frequency within the set is higher than the shard_min_doc_count. If your dictionary contains many low-frequency terms and you are not interested in those (for example misspellings), then you can set the shard_min_doc_count parameter to filter out candidate terms on a shard level that will, with a reasonable certainty, not reach the required min_doc_count even after merging the local counts. shard_min_doc_count is set to 0 by default and has no effect unless you explicitly set it.
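A minimal sketch, reusing the tags field from the min_doc_count example above, that additionally prunes candidates on each shard before the final merge:
# Illustrative only: terms seen fewer than 2 times on a shard are not proposed as candidates
GET /_search
{
  "aggs": {
    "tags": {
      "terms": {
        "field": "tags",
        "min_doc_count": 10,
        "shard_min_doc_count": 2
      }
    }
  }
}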
Setting min_doc_count=0 will also return buckets for terms that didn't match any hit. However, some of the returned terms which have a document count of zero might only belong to deleted documents or documents from other types, so there is no guarantee that a match_all query would find a positive document count for those terms.
When NOT sorting on doc_count descending, high values of min_doc_count may return a number of buckets which is less than size because not enough data was gathered from the shards. Missing buckets can be brought back by increasing shard_size.
Setting shard_min_doc_count too high will cause terms to be filtered out on a shard level. This value should be set much lower than min_doc_count/#shards.
Script
Use a runtime field if the data in your documents doesn't exactly match what you'd like to aggregate. If, for example, "anthologies" need to be in a special category then you could run this:
GET /_search
{
  "size": 0,
  "runtime_mappings": {
    "normalized_genre": {
      "type": "keyword",
      "script": """
        String genre = doc['genre'].value;
        if (doc['product'].value.startsWith('Anthology')) {
          emit(genre + ' anthology');
        } else {
          emit(genre);
        }
      """
    }
  },
  "aggs": {
    "genres": {
      "terms": { "field": "normalized_genre" }
    }
  }
}
Which will look like:
{
  "aggregations": {
    "genres": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        { "key": "electronic", "doc_count": 4 },
        { "key": "rock", "doc_count": 3 },
        { "key": "electronic anthology", "doc_count": 2 },
        { "key": "jazz", "doc_count": 2 }
      ]
    }
  },
  ...
}
This is a little slower because the runtime field has to access two fields instead of one and because there are some optimizations that work on non-runtime keyword fields that we have to give up for runtime keyword fields. If you need the speed, you can index the normalized_genre field.
Filtering Values
It is possible to filter the values for which buckets will be created. This can be done using the include and exclude parameters which are based on regular expression strings or arrays of exact values. Additionally, include clauses can filter using partition expressions.
Filtering Values with regular expressions
GET /_search
{
  "aggs": {
    "tags": {
      "terms": {
        "field": "tags",
        "include": ".*sport.*",
        "exclude": "water_.*"
      }
    }
  }
}
In the above example, buckets will be created for all the tags that have the word sport in them, except those starting with water_ (so the tag water_sports will not be aggregated). The include regular expression will determine what values are "allowed" to be aggregated, while the exclude determines the values that should not be aggregated. When both are defined, the exclude has precedence, meaning the include is evaluated first and only then the exclude.
The syntax is the same as regexp queries.
Filtering Values with exact values
For matching based on exact values the include and exclude parameters can simply take an array of strings that represent the terms as they are found in the index:
GET /_search
{
  "aggs": {
    "JapaneseCars": {
      "terms": {
        "field": "make",
        "include": [ "mazda", "honda" ]
      }
    },
    "ActiveCarManufacturers": {
      "terms": {
        "field": "make",
        "exclude": [ "rover", "jensen" ]
      }
    }
  }
}
Filtering Values with partitions
Sometimes there are too many unique terms to process in a single request/response pair so it can be useful to break the analysis up into multiple requests. This can be achieved by grouping the field's values into a number of partitions at query-time and processing only one partition in each request. Consider this request which is looking for accounts that have not logged any access recently:
GET /_search
{
  "size": 0,
  "aggs": {
    "expired_sessions": {
      "terms": {
        "field": "account_id",
        "include": {
          "partition": 0,
          "num_partitions": 20
        },
        "size": 10000,
        "order": { "last_access": "asc" }
      },
      "aggs": {
        "last_access": { "max": { "field": "access_date" } }
      }
    }
  }
}
This request is finding the last logged access date for a subset of customer accounts because we might want to expire some customer accounts who haven't been seen for a long while.
The num_partitions setting has requested that the unique account_ids are organized evenly into twenty partitions (0 to 19), and the partition setting in this request filters to only consider account_ids falling into partition 0. Subsequent requests should ask for partitions 1 then 2, etc., to complete the expired-account analysis.
Note that the size setting for the number of results returned needs to be tuned with the num_partitions.
For this particular account-expiration example the process for balancing values for size and num_partitions would be as follows:
1. Use the cardinality aggregation to estimate the total number of unique account_id values (a sketch follows this list)
2. Pick a value for num_partitions to break the number from 1) up into more manageable chunks
3. Pick a size value for the number of responses we want from each partition
4. Run a test request
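For step 1, a sketch of the cardinality estimate on the same account_id field used above (the aggregation name is arbitrary):
# Illustrative only: estimate how many distinct account_ids exist
GET /_search
{
  "size": 0,
  "aggs": {
    "unique_account_ids": {
      "cardinality": { "field": "account_id" }
    }
  }
}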
If we have a circuit-breaker error we are trying to do too much in one request and must increase num_partitions.
If the request was successful but the last account ID in the date-sorted test response was still an account we might want to expire then we may be missing accounts of interest and have set our numbers too low. We must either:
- increase the size parameter to return more results per partition (could be heavy on memory) or
- increase the num_partitions to consider fewer accounts per request (could increase overall processing time as we need to make more requests)
Ultimately this is a balancing act between managing the Elasticsearch resources required to process a single request and the volume of requests that the client application must issue to complete a task.
Partitions cannot be used together with an exclude parameter.
Multi-field terms aggregation
The terms aggregation does not support collecting terms from multiple fields in the same document. The reason is that the terms agg doesn't collect the string term values themselves, but rather uses global ordinals to produce a list of all of the unique values in the field. Global ordinals result in an important performance boost which would not be possible across multiple fields.
There are three approaches that you can use to perform a terms agg across multiple fields:
- Script: Use a script to retrieve terms from multiple fields. This disables the global ordinals optimization and will be slower than collecting terms from a single field, but it gives you the flexibility to implement this option at search time.
- copy_to field: If you know ahead of time that you want to collect the terms from two or more fields, then use copy_to in your mapping to create a new dedicated field at index time which contains the values from both fields. You can aggregate on this single field, which will benefit from the global ordinals optimization (see the mapping sketch after this list).
- multi_terms aggregation: Use multi_terms aggregation to combine terms from multiple fields into a compound key. This also disables the global ordinals and will be slower than collecting terms from a single field. It is faster but less flexible than using a script.
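As an illustration of the copy_to approach, a hypothetical mapping (the index and field names below are examples, not taken from this page) that copies two keyword fields into a single dedicated field which can then be used in a terms aggregation:
# Hypothetical index and field names, for illustration only
PUT /music
{
  "mappings": {
    "properties": {
      "genre":      { "type": "keyword", "copy_to": "all_genres" },
      "sub_genre":  { "type": "keyword", "copy_to": "all_genres" },
      "all_genres": { "type": "keyword" }
    }
  }
}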
Collect mode
Deferring calculation of child aggregations
For fields with many unique terms and a small number of required results it can be more efficient to delay the calculation of child aggregations until the top parent-level aggs have been pruned. Ordinarily, all branches of the aggregation tree are expanded in one depth-first pass and only then any pruning occurs. In some scenarios this can be very wasteful and can hit memory constraints. An example problem scenario is querying a movie database for the 10 most popular actors and their 5 most common co-stars:
GET /_search
{
  "aggs": {
    "actors": {
      "terms": { "field": "actors", "size": 10 },
      "aggs": {
        "costars": {
          "terms": { "field": "actors", "size": 5 }
        }
      }
    }
  }
}
Even though the number of actors may be comparatively small and we want only 50 result buckets, there is a combinatorial explosion of buckets during calculation - a single actor can produce n² buckets where n is the number of actors. The sane option would be to first determine the 10 most popular actors and only then examine the top co-stars for these 10 actors. This alternative strategy is what we call the breadth_first collection mode as opposed to the depth_first mode.
The breadth_first mode is the default for fields with a cardinality bigger than the requested size or when the cardinality is unknown (numeric fields or scripts for instance).
It is possible to override the default heuristic and to provide a collect mode directly in the request:
GET /_search
{
  "aggs": {
    "actors": {
      "terms": {
        "field": "actors",
        "size": 10,
        "collect_mode": "breadth_first"
      },
      "aggs": {
        "costars": {
          "terms": { "field": "actors", "size": 5 }
        }
      }
    }
  }
}
When using breadth_first mode, the set of documents that fall into the uppermost buckets is cached for subsequent replay, so there is a memory overhead in doing this which is linear with the number of matching documents.
Note that the order
parameter can still be used to refer to data from a child aggregation when using the breadth_first
setting - the parent
aggregation understands that this child aggregation will need to be called first before any of the other child aggregations.
Nested aggregations such as top_hits
which require access to score information under an aggregation that uses the breadth_first
collection mode need to replay the query on the second pass but only for the documents belonging to the top buckets.
Execution hint
There are different mechanisms by which terms aggregations can be executed:
- by using field values directly in order to aggregate data per-bucket (map)
- by using global ordinals of the field and allocating one bucket per global ordinal (global_ordinals)
Elasticsearch tries to have sensible defaults so this is something that generally doesn't need to be configured.
global_ordinals is the default option for keyword fields; it uses global ordinals to allocate buckets dynamically, so memory usage is linear to the number of values of the documents that are part of the aggregation scope.
map should only be considered when very few documents match a query. Otherwise the ordinals-based execution mode is significantly faster. By default, map is only used when running an aggregation on scripts, since they don't have ordinals.
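If you do need to force a mode, a sketch that sets the hint explicitly on the genre field from the earlier examples:
# Illustrative only: force the map execution mode
GET /_search
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "execution_hint": "map"
      }
    }
  }
}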
Please note that Elasticsearch will ignore this execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints.
Missing value
The missing parameter defines how documents that are missing a value should be treated. By default they will be ignored but it is also possible to treat them as if they had a value.
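For example, a sketch (on the genre field from the earlier examples) that gathers documents without a genre into a bucket of their own; the "N/A" key is just an illustrative placeholder:
# Illustrative only: documents missing the field are counted under "N/A"
GET /_search
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "missing": "N/A"
      }
    }
  }
}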
Mixing field types
When aggregating on multiple indices the type of the aggregated field may not be the same in all indices. Some types are compatible with each other (integer and long or float and double) but when the types are a mix of decimal and non-decimal numbers the terms aggregation will promote the non-decimal numbers to decimal numbers. This can result in a loss of precision in the bucket values.
Troubleshooting
Failed Trying to Format Bytes
When running a terms aggregation (or other aggregation, but in practice usually terms) over multiple indices, you may get an error that starts with "Failed trying to format bytes…". This is usually caused by two of the indices not having the same mapping type for the field being aggregated.
Use an explicit value_type
Although it’s best to correct the mappings, you can work around this issue if
the field is unmapped in one of the indices. Setting the value_type
parameter
can resolve the issue by coercing the unmapped field into the correct type.
GET /_search
{
  "aggs": {
    "ip_addresses": {
      "terms": {
        "field": "destination_ip",
        "missing": "0.0.0.0",
        "value_type": "ip"
      }
    }
  }
}