WARNING: Version 5.1 of Elasticsearch has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Diversified Sampler Aggregation

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
A filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents. Diversity settings are used to limit the number of matches that share a common value such as an "author".
Example use cases:
- Tightening the focus of analytics to high-relevance matches rather than the potentially very long tail of low-quality matches
- Removing bias from analytics by ensuring fair representation of content from different sources
- Reducing the running cost of aggregations that can produce useful results using only samples, e.g. significant_terms
Example:
{ "query": { "match": { "text": "iphone" } }, "aggs": { "sample": { "diversified_sampler": { "shard_size": 200, "field" : "user.id" }, "aggs": { "keywords": { "significant_terms": { "field": "text" } } } } } }
Response:
{ ... "aggregations": { "sample": { "doc_count": 1000, "keywords": { "doc_count": 1000, "buckets": [ ... { "key": "bend", "doc_count": 58, "score": 37.982536582524276, "bg_count": 103 }, .... }
1000 documents were sampled in total because we asked for a maximum of 200 from an index with 5 shards (200 × 5 = 1000). The cost of performing the nested significant_terms aggregation was therefore limited rather than unbounded.

The results of the significant_terms aggregation are not skewed by any single over-active Twitter user, because we asked for a maximum of one tweet from any one user in our sample.
shard_size

The shard_size parameter limits how many top-scoring documents are collected in the sample processed on each shard. The default value is 100.
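For example, a minimal sketch that raises the per-shard sample size above the default of 100 (the user.id field is carried over from the earlier example; the value 500 is illustrative):

{
    "aggs": {
        "sample": {
            "diversified_sampler": {
                "field": "user.id",
                "shard_size": 500
            }
        }
    }
}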
Controlling diversity

The field or script setting and the max_docs_per_value setting are used to control the maximum number of documents collected on any one shard which share a common value. The choice of value (e.g. author) is loaded from a regular field or derived dynamically by a script.
The aggregation will throw an error if the choice of field or script produces multiple values for a document. It is currently not possible to offer this form of de-duplication using many values, primarily due to concerns over efficiency.
Any good market researcher will tell you that when working with samples of data it is important that the sample represents a healthy variety of opinions rather than being skewed by any single voice. The same is true with aggregations: sampling with these diversify settings can offer a way to remove the bias in your content (an over-populated geography, a large spike in a timeline or an over-active forum spammer).
Field

Controlling diversity using a field:
{ "aggs" : { "sample" : { "diversified_sampler" : { "field" : "author", "max_docs_per_value" : 3 } } } }
Note that the max_docs_per_value setting applies on a per-shard basis only, for the purposes of shard-local sampling. It is not intended as a way of providing a global de-duplication feature on search results.
Script

Controlling diversity using a script:
{ "aggs" : { "sample" : { "diversified_sampler" : { "script" : { "lang" : "painless", "inline" : "doc['author'].value + '/' + doc['genre'].value" } } } } }
Note in the above example we chose to use the default max_docs_per_value setting of 1 and combine the author and genre fields to ensure each shard sample has, at most, one match for each author/genre pair.
execution_hint

When using the settings to control diversity, the optional execution_hint setting can influence the management of the values used for de-duplication. Each option will hold up to shard_size values in memory while performing de-duplication, but the type of value held can be controlled as follows:
- hold field values directly (map)
- hold ordinals of the field as determined by the Lucene index (global_ordinals)
- hold hashes of the field values - with potential for hash collisions (bytes_hash)
The default setting is to use global_ordinals if this information is available from the Lucene index and to revert to map if not. The bytes_hash setting may prove faster in some cases but introduces the possibility of false positives in the de-duplication logic due to hash collisions.
Please note that Elasticsearch will ignore the choice of execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints.
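For illustration, a sketch that forces the map strategy, reusing the author field from the examples above (as noted, the hint may be ignored where it is not applicable):

{
    "aggs": {
        "sample": {
            "diversified_sampler": {
                "field": "author",
                "max_docs_per_value": 3,
                "execution_hint": "map"
            }
        }
    }
}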
Limitations

Cannot be nested under breadth_first aggregations
Being a quality-based filter, the sampler aggregation needs access to the relevance score produced for each document. It therefore cannot be nested under a terms aggregation which has the collect_mode switched from the default depth_first mode to breadth_first, as this discards scores. In this situation an error will be thrown.
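As an illustration, a request shaped like the following sketch (the tags and author fields are hypothetical) would be rejected, because the breadth_first terms aggregation discards the scores the sampler needs:

{
    "aggs": {
        "tags": {
            "terms": {
                "field": "tags",
                "collect_mode": "breadth_first"
            },
            "aggs": {
                "sample": {
                    "diversified_sampler": {
                        "field": "author"
                    }
                }
            }
        }
    }
}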
Limited de-dup logic

The de-duplication logic in the diversify settings applies only at a shard level, so it will not apply across shards.
No specialized syntax for geo/date fields

Currently the syntax for defining the diversifying values is limited to a choice of field or script - there is no added syntactical sugar for expressing geo or date units such as "7d" (7 days). This support may be added in a later release; for now, users will have to create these sorts of values using a script.
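For example, a sketch of diversifying by calendar day, assuming a date field named timestamp whose doc value is exposed as epoch milliseconds; dividing by 86400000 (the number of milliseconds in a day) yields one diversity value per day:

{
    "aggs": {
        "sample": {
            "diversified_sampler": {
                "script": {
                    "lang": "painless",
                    "inline": "doc['timestamp'].value / 86400000"
                }
            }
        }
    }
}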