Frequent item sets aggregation
A bucket aggregation which finds frequent item sets. It is a form of association rules mining that identifies items that often occur together. Items that are frequently purchased together or log events that tend to co-occur are examples of frequent item sets. Finding frequent item sets helps to discover relationships between different data points (items).
The aggregation reports closed item sets. A frequent item set is called closed if no superset exists with the same ratio of documents (also known as its support value). For example, we have the two following candidates for a frequent item set, which have the same support value:

1. apple, orange, banana
2. apple, orange, banana, tomato

Only the second item set (apple, orange, banana, tomato) is returned, and the first set, which is a subset of the second one, is skipped. Both item sets might be returned if their support values are different.
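To make the notion of a closed item set concrete, here is a small self-contained sketch in plain Python (a toy in-memory computation over invented transactions, not an Elasticsearch call):

from itertools import combinations

# Toy transactions: every transaction that contains {apple, orange, banana}
# also contains tomato, so those two candidate sets share one support value.
transactions = [
    {"apple", "orange", "banana", "tomato"},
    {"apple", "orange", "banana", "tomato"},
    {"apple", "tomato"},
    {"orange", "banana"},
]

def support(item_set):
    # Fraction of transactions that contain every item in item_set.
    return sum(item_set <= t for t in transactions) / len(transactions)

items = sorted(set().union(*transactions))
candidates = [
    frozenset(c)
    for r in range(2, len(items) + 1)
    for c in combinations(items, r)
    if support(frozenset(c)) > 0
]

# An item set is closed if no strict superset has the same support value.
closed = [
    c for c in candidates
    if not any(c < d and support(c) == support(d) for d in candidates)
]

for c in sorted(closed, key=support, reverse=True):
    print(sorted(c), support(c))

In this toy data, apple, orange, banana is pruned because its superset apple, orange, banana, tomato has the same support of 0.5, mirroring the example above.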
The runtime of the aggregation depends on the data and the provided parameters. It might take a significant time for the aggregation to complete. For this reason, it is recommended to use async search to run your requests asynchronously.
Syntax
A frequent_item_sets aggregation looks like this in isolation:

"frequent_item_sets": {
  "minimum_set_size": 3,
  "fields": [
    { "field": "my_field_1" },
    { "field": "my_field_2" }
  ]
}
Table 51. frequent_item_sets Parameters

Parameter Name | Description | Required | Default Value
fields | (array) Fields to analyze. | Required | -
minimum_set_size | (integer) The minimum size of one item set. | Optional | 1
minimum_support | (double) The minimum support of one item set. | Optional | 0.1
size | (integer) The number of top item sets to return. | Optional | 10
filter | (object) Query that filters documents from the analysis. | Optional | match_all query
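As an illustration of how these parameters combine in a single request, here is a sketch using the Python client shown later in the examples (the index and field names are hypothetical):

resp = client.search(
    index="my-index",                    # hypothetical index name
    size=0,
    aggs={
        "my_agg": {
            "frequent_item_sets": {
                "fields": [
                    {"field": "my_field_1"},
                    {"field": "my_field_2"},
                ],
                "minimum_set_size": 3,   # sets must contain at least 3 items
                "minimum_support": 0.1,  # sets must occur in >= 10% of documents
                "size": 10,              # return the top 10 item sets
                # restrict the analysis to matching documents (hypothetical field)
                "filter": {"term": {"my_field_3": "some-value"}},
            }
        }
    },
)
print(resp)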
Fields
Supported field types for the analyzed fields are keyword, numeric, ip, date, and arrays of these types. You can also add runtime fields to your analyzed fields.
If the combined cardinality of the analyzed fields is high, the aggregation might require a significant amount of system resources.
You can filter the values for each field by using the include and exclude parameters. The parameters can be regular expression strings or arrays of strings of exact terms. The filtered values are removed from the analysis and therefore reduce the runtime. If both include and exclude are defined, exclude takes precedence; it means include is evaluated first and then exclude.
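A sketch of both parameters on one field, following the precedence rule above (field names and terms are hypothetical; include is given as an array of exact terms and exclude as a regular expression):

resp = client.search(
    index="my-index",  # hypothetical index name
    size=0,
    aggs={
        "my_agg": {
            "frequent_item_sets": {
                "minimum_set_size": 2,
                "fields": [
                    {
                        "field": "my_field_1",
                        # keep only these exact terms...
                        "include": ["apple", "orange", "banana", "tomato"],
                        # ...then drop regex matches; exclude takes precedence
                        "exclude": "tom.*",
                    },
                    {"field": "my_field_2"},
                ],
            }
        }
    },
)
print(resp)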
Minimum set size
The minimum set size is the minimum number of items the set needs to contain. A value of 1 returns the frequency of single items. Only item sets that contain at least minimum_set_size items are returned. For example, the item set orange, banana, apple is returned only if the minimum set size is 3 or lower.
Minimum support
The minimum support value is the ratio of documents that an item set must exist in to be considered "frequent". In particular, it is a normalized value between 0 and 1. It is calculated by dividing the number of documents containing the item set by the total number of documents.
For example, if a given item set is contained by five documents and the total number of documents is 20, then the support of the item set is 5/20 = 0.25. Therefore, this set is returned only if the minimum support is 0.25 or lower. As a higher minimum support prunes more items, the calculation is less resource intensive. The minimum_support parameter has an effect on the required memory and the runtime of the aggregation.
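A quick sketch of the arithmetic: the support values in the example responses below imply a total of 4,675 documents in the e-commerce sample data (217 / 4675 ≈ 0.0464), so a given minimum_support translates into an absolute document count like this:

import math

total_docs = 4675            # inferred from the sample responses below
minimum_support = 0.1
min_doc_count = math.ceil(total_docs * minimum_support)
print(min_doc_count)         # 468: item sets in fewer documents are pruned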
Size
This parameter defines the maximum number of item sets to return. The result contains the top-k item sets: the item sets with the highest support values. This parameter has a significant effect on the required memory and the runtime of the aggregation.
Filter
A query to filter documents to use as part of the analysis. Documents that don’t match the filter are ignored when generating the item sets; however, they still count when calculating the support of an item set.
Use the filter if you want to narrow the item set analysis to fields of interest. Use a top-level query to filter the data set.
Examples
In the following examples, we use the e-commerce Kibana sample data set.
Aggregation with two analyzed fields and an exclude parameter

In the first example, the goal is to find out, based on transaction data, (1) from what product categories the customers purchase products frequently together and (2) from which cities they make those purchases. We want to exclude results where location information is not available (where the city name is other). Finally, we are interested in sets with three or more items, and want to see the first three frequent item sets with the highest support.

Note that we use the async search endpoint in this first example.
resp = client.async_search.submit(
    index="kibana_sample_data_ecommerce",
    size=0,
    aggs={
        "my_agg": {
            "frequent_item_sets": {
                "minimum_set_size": 3,
                "fields": [
                    {"field": "category.keyword"},
                    {"field": "geoip.city_name", "exclude": "other"},
                ],
                "size": 3,
            }
        }
    },
)
print(resp)

const response = await client.asyncSearch.submit({
  index: "kibana_sample_data_ecommerce",
  size: 0,
  aggs: {
    my_agg: {
      frequent_item_sets: {
        minimum_set_size: 3,
        fields: [
          { field: "category.keyword" },
          { field: "geoip.city_name", exclude: "other" },
        ],
        size: 3,
      },
    },
  },
});
console.log(response);

POST /kibana_sample_data_ecommerce/_async_search
{
  "size": 0,
  "aggs": {
    "my_agg": {
      "frequent_item_sets": {
        "minimum_set_size": 3,
        "fields": [
          { "field": "category.keyword" },
          { "field": "geoip.city_name", "exclude": "other" }
        ],
        "size": 3
      }
    }
  }
}
The response of the API call above contains an identifier (id) of the async search request. You can use the identifier to retrieve the search results:
resp = client.async_search.get(
    id="<id>",
)
print(resp)

const response = await client.asyncSearch.get({
  id: "<id>",
});
console.log(response);

GET /_async_search/<id>
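While the search is still running, the get call returns a partial response with is_running set to true. A minimal polling sketch with the Python client, using the wait_for_completion_timeout parameter of the get async search API:

import time

# Poll until the async search completes; each call blocks for up to one
# second before returning the (possibly still partial) response.
while True:
    resp = client.async_search.get(
        id="<id>",
        wait_for_completion_timeout="1s",
    )
    if not resp["is_running"]:
        break
    time.sleep(1)

print(resp["response"]["aggregations"]["my_agg"]["buckets"])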
The API returns a response similar to the following one:
(...)
"aggregations" : {
  "my_agg" : {
    "buckets" : [
      {
        "key" : {
          "category.keyword" : [ "Women's Clothing", "Women's Shoes" ],
          "geoip.city_name" : [ "New York" ]
        },
        "doc_count" : 217,
        "support" : 0.04641711229946524
      },
      {
        "key" : {
          "category.keyword" : [ "Women's Clothing", "Women's Accessories" ],
          "geoip.city_name" : [ "New York" ]
        },
        "doc_count" : 135,
        "support" : 0.028877005347593583
      },
      {
        "key" : {
          "category.keyword" : [ "Men's Clothing", "Men's Shoes" ],
          "geoip.city_name" : [ "Cairo" ]
        },
        "doc_count" : 123,
        "support" : 0.026310160427807486
      }
    ],
    (...)
  }
}
buckets: The array of returned item sets.
key: The key object contains one item set. In this case, it consists of two values of the category.keyword field and one value of geoip.city_name.
doc_count: The number of documents that contain the item set.
support: The support value of the item set. It is calculated by dividing the number of documents containing the item set by the total number of documents.
The response shows that the categories customers purchase from most frequently together are Women's Clothing and Women's Shoes, and that customers from New York tend to buy items from these categories frequently together. In other words, customers who buy products labelled Women's Clothing are also likely to buy products from the Women's Shoes category, and customers from New York most likely buy products from these categories together. The item set with the second highest support is Women's Clothing and Women's Accessories, with customers mostly from New York. Finally, the item set with the third highest support is Men's Clothing and Men's Shoes, with customers mostly from Cairo.
Aggregation with two analyzed fields and a filter
We take the first example, but want to narrow the item sets to places in Europe. For that, we add a filter, and this time, we don’t use the exclude parameter:
resp = client.async_search.submit(
    index="kibana_sample_data_ecommerce",
    size=0,
    aggs={
        "my_agg": {
            "frequent_item_sets": {
                "minimum_set_size": 3,
                "fields": [
                    {"field": "category.keyword"},
                    {"field": "geoip.city_name"},
                ],
                "size": 3,
                "filter": {"term": {"geoip.continent_name": "Europe"}},
            }
        }
    },
)
print(resp)

const response = await client.asyncSearch.submit({
  index: "kibana_sample_data_ecommerce",
  size: 0,
  aggs: {
    my_agg: {
      frequent_item_sets: {
        minimum_set_size: 3,
        fields: [
          { field: "category.keyword" },
          { field: "geoip.city_name" },
        ],
        size: 3,
        filter: {
          term: {
            "geoip.continent_name": "Europe",
          },
        },
      },
    },
  },
});
console.log(response);

POST /kibana_sample_data_ecommerce/_async_search
{
  "size": 0,
  "aggs": {
    "my_agg": {
      "frequent_item_sets": {
        "minimum_set_size": 3,
        "fields": [
          { "field": "category.keyword" },
          { "field": "geoip.city_name" }
        ],
        "size": 3,
        "filter": {
          "term": { "geoip.continent_name": "Europe" }
        }
      }
    }
  }
}
The result will only show item sets created from documents matching the filter, namely purchases in Europe. Using filter, the calculated support still takes all purchases into account. That's different from specifying a query at the top level, in which case support is calculated only from purchases in Europe.
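For contrast, a sketch of that top-level query variant (same aggregation as above, with the Europe term query moved out of the aggregation, so support is calculated only from purchases in Europe):

resp = client.async_search.submit(
    index="kibana_sample_data_ecommerce",
    size=0,
    # the query now narrows the data set itself, not just the analysis
    query={"term": {"geoip.continent_name": "Europe"}},
    aggs={
        "my_agg": {
            "frequent_item_sets": {
                "minimum_set_size": 3,
                "fields": [
                    {"field": "category.keyword"},
                    {"field": "geoip.city_name"},
                ],
                "size": 3,
            }
        }
    },
)
print(resp)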
Analyzing numeric values by using a runtime field
The frequent items aggregation enables you to bucket numeric values by using runtime fields. The next example demonstrates how to use a script to add a runtime field to your documents called price_range, which is calculated from the taxful total price of the individual transactions. The runtime field then can be used in the frequent items aggregation as a field to analyze.
resp = client.search(
    index="kibana_sample_data_ecommerce",
    runtime_mappings={
        "price_range": {
            "type": "keyword",
            "script": {
                "source": """
                  def bucket_start = (long) Math.floor(doc['taxful_total_price'].value / 50) * 50;
                  def bucket_end = bucket_start + 50;
                  emit(bucket_start.toString() + "-" + bucket_end.toString());
                """
            },
        }
    },
    size=0,
    aggs={
        "my_agg": {
            "frequent_item_sets": {
                "minimum_set_size": 4,
                "fields": [
                    {"field": "category.keyword"},
                    {"field": "price_range"},
                    {"field": "geoip.city_name"},
                ],
                "size": 3,
            }
        }
    },
)
print(resp)

const response = await client.search({
  index: "kibana_sample_data_ecommerce",
  runtime_mappings: {
    price_range: {
      type: "keyword",
      script: {
        source: `
          def bucket_start = (long) Math.floor(doc['taxful_total_price'].value / 50) * 50;
          def bucket_end = bucket_start + 50;
          emit(bucket_start.toString() + "-" + bucket_end.toString());
        `,
      },
    },
  },
  size: 0,
  aggs: {
    my_agg: {
      frequent_item_sets: {
        minimum_set_size: 4,
        fields: [
          { field: "category.keyword" },
          { field: "price_range" },
          { field: "geoip.city_name" },
        ],
        size: 3,
      },
    },
  },
});
console.log(response);

GET kibana_sample_data_ecommerce/_search
{
  "runtime_mappings": {
    "price_range": {
      "type": "keyword",
      "script": {
        "source": """
          def bucket_start = (long) Math.floor(doc['taxful_total_price'].value / 50) * 50;
          def bucket_end = bucket_start + 50;
          emit(bucket_start.toString() + "-" + bucket_end.toString());
        """
      }
    }
  },
  "size": 0,
  "aggs": {
    "my_agg": {
      "frequent_item_sets": {
        "minimum_set_size": 4,
        "fields": [
          { "field": "category.keyword" },
          { "field": "price_range" },
          { "field": "geoip.city_name" }
        ],
        "size": 3
      }
    }
  }
}
The API returns a response similar to the following one:
(...)
"aggregations" : {
  "my_agg" : {
    "buckets" : [
      {
        "key" : {
          "category.keyword" : [ "Women's Clothing", "Women's Shoes" ],
          "price_range" : [ "50-100" ],
          "geoip.city_name" : [ "New York" ]
        },
        "doc_count" : 100,
        "support" : 0.0213903743315508
      },
      {
        "key" : {
          "category.keyword" : [ "Women's Clothing", "Women's Shoes" ],
          "price_range" : [ "50-100" ],
          "geoip.city_name" : [ "Dubai" ]
        },
        "doc_count" : 59,
        "support" : 0.012620320855614974
      },
      {
        "key" : {
          "category.keyword" : [ "Men's Clothing", "Men's Shoes" ],
          "price_range" : [ "50-100" ],
          "geoip.city_name" : [ "Marrakesh" ]
        },
        "doc_count" : 53,
        "support" : 0.011336898395721925
      }
    ],
    (...)
  }
}
The response shows the categories that customers purchase from most frequently together, the location of the customers who tend to buy items from these categories, and the most frequent price ranges of these purchases.
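To consume the result programmatically, the buckets can be read straight off the response; a minimal sketch with the Python response object from the example above:

# Each bucket carries the item set under "key" plus its doc_count and support.
for bucket in resp["aggregations"]["my_agg"]["buckets"]:
    print(dict(bucket["key"]), bucket["doc_count"], bucket["support"])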