Paginate search results
By default, the search API returns the top 10 matching documents. To paginate through a larger set of results, you can use the search API’s size and from parameters. The size parameter is the number of matching documents to return. The from parameter is a zero-indexed offset from the beginning of the complete result set that indicates the document you want to start with.
The following search API request sets the from offset to 5, meaning the request offsets, or skips, the first five matching documents. The size parameter is 20, meaning the request can return up to 20 documents, starting at the offset.
GET /_search
{
  "from": 5,
  "size": 20,
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}
By default, you cannot page through more than 10,000 documents using the from and size parameters. This limit is set using the index.max_result_window index setting.
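In client code, from/size paging is usually driven by a loop that advances the offset by the page size and stops before the index.max_result_window limit. The following is only a minimal sketch, assuming the Python elasticsearch client (8.x-style keyword arguments), a local cluster, and a hypothetical index:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

PAGE_SIZE = 20
offset = 0
while True:
    resp = es.search(
        index="my-index-000001",                 # hypothetical index
        query={"match": {"user.id": "kimchy"}},
        from_=offset,                            # "from" is a reserved word in Python
        size=PAGE_SIZE,
    )
    hits = resp["hits"]["hits"]
    if not hits:
        break
    for hit in hits:
        print(hit["_id"])
    offset += PAGE_SIZE
    if offset + PAGE_SIZE > 10000:               # index.max_result_window default
        break                                    # beyond this, use search_after or scroll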
Deep paging or requesting many results at once can result in slow searches. Results are sorted before being returned. Because search requests usually span multiple shards, each shard must generate its own sorted results. These separate results must then be combined and sorted to ensure that the overall sort order is correct.
As an alternative to deep paging, we recommend using scroll or the search_after parameter.
Elasticsearch uses Lucene’s internal doc IDs as tie-breakers. These internal doc IDs can be completely different across replicas of the same data. When paginating, you might occasionally see that documents with the same sort values are not ordered consistently.
Scroll search results
While a search request returns a single “page” of results, the scroll API can be used to retrieve large numbers of results (or even all results) from a single search request, in much the same way as you would use a cursor on a traditional database.
Scrolling is not intended for real-time user requests, but rather for processing large amounts of data, e.g. to reindex the contents of one data stream or index into a new data stream or index with a different configuration.
The results that are returned from a scroll request reflect the state of the data stream or index at the time that the initial search request was made, like a snapshot in time. Subsequent changes to documents (index, update or delete) will only affect later search requests.
In order to use scrolling, the initial search request should specify the scroll parameter in the query string, which tells Elasticsearch how long it should keep the “search context” alive (see Keeping the search context alive), e.g. ?scroll=1m.
POST /my-index-000001/_search?scroll=1m
{
  "size": 100,
  "query": {
    "match": {
      "message": "foo"
    }
  }
}
The result from the above request includes a _scroll_id, which should be passed to the scroll API in order to retrieve the next batch of results.
POST /_search/scroll
{
  "scroll" : "1m",
  "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
The size parameter allows you to configure the maximum number of hits to be returned with each batch of results. Each call to the scroll API returns the next batch of results until there are no more results left to return, i.e. the hits array is empty.
The initial search request and each subsequent scroll request each return a _scroll_id. While the _scroll_id may change between requests, it doesn’t always change; in any case, only the most recently received _scroll_id should be used.
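Putting this together, a client issues the initial search with the scroll query string parameter, then calls the scroll API in a loop, always passing the most recently received _scroll_id, until the hits array comes back empty. A minimal sketch, assuming the Python elasticsearch client (8.x-style keyword arguments) and a hypothetical index and query:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# The initial search opens the search context and returns the first batch.
resp = es.search(
    index="my-index-000001",                 # hypothetical index
    scroll="1m",                             # keep the search context alive for 1 minute
    size=100,
    query={"match": {"message": "foo"}},
)

while resp["hits"]["hits"]:
    for hit in resp["hits"]["hits"]:
        print(hit["_id"])                    # placeholder for real processing
    # Always pass the most recently received _scroll_id; it may change between calls.
    resp = es.scroll(scroll_id=resp["_scroll_id"], scroll="1m")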
If the request specifies aggregations, only the initial search response will contain the aggregations results.
Scroll requests have optimizations that make them faster when the sort order is _doc. If you want to iterate over all documents regardless of the order, this is the most efficient option:
GET /_search?scroll=1m
{
  "sort": [
    "_doc"
  ]
}
Keeping the search context alive
A scroll returns all the documents which matched the search at the time of the initial search request. It ignores any subsequent changes to these documents. The scroll_id identifies a search context which keeps track of everything that Elasticsearch needs to return the correct documents. The search context is created by the initial request and kept alive by subsequent requests.
The scroll parameter (passed to the search request and to every scroll request) tells Elasticsearch how long it should keep the search context alive. Its value (e.g. 1m, see Time units) does not need to be long enough to process all data; it just needs to be long enough to process the previous batch of results. Each scroll request (with the scroll parameter) sets a new expiry time. If a scroll request doesn’t pass in the scroll parameter, then the search context will be freed as part of that scroll request.
Normally, the background merge process optimizes the index by merging together smaller segments to create new, bigger segments. Once the smaller segments are no longer needed they are deleted. This process continues during scrolling, but an open search context prevents the old segments from being deleted since they are still in use.
Keeping older segments alive means that more disk space and file handles are needed. Ensure that you have configured your nodes to have ample free file handles. See File Descriptors.
Additionally, if a segment contains deleted or updated documents then the search context must keep track of whether each document in the segment was live at the time of the initial search request. Ensure that your nodes have sufficient heap space if you have many open scrolls on an index that is subject to ongoing deletes or updates.
To prevent issues caused by having too many scrolls open, the user is not allowed to open scrolls past a certain limit. By default, the maximum number of open scrolls is 500. This limit can be updated with the search.max_open_scroll_context cluster setting.
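As a sketch, updating that cluster setting from the Python elasticsearch client (8.x-style keyword arguments) might look like this; the value 1024 is an arbitrary example:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Raise the cap on concurrently open scroll contexts (example value).
es.cluster.put_settings(persistent={"search.max_open_scroll_context": 1024})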
You can check how many search contexts are open with the nodes stats API:
GET /_nodes/stats/indices/search
Clear scroll
Search contexts are automatically removed when the scroll timeout has been exceeded. However, keeping scrolls open has a cost, as discussed in the previous section, so scrolls should be explicitly cleared as soon as the scroll is no longer being used, using the clear-scroll API:
DELETE /_search/scroll
{
  "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
Multiple scroll IDs can be passed as an array:
DELETE /_search/scroll
{
  "scroll_id" : [
    "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
    "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB"
  ]
}
All search contexts can be cleared with the _all parameter:
DELETE /_search/scroll/_all
The scroll_id can also be passed as a query string parameter or in the request body. Multiple scroll IDs can be passed as comma-separated values:

DELETE /_search/scroll/DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB
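In client code, a common pattern is to clear the scroll in a finally block so the search context is released even if batch processing fails partway through. A sketch, assuming the Python elasticsearch client and the same hypothetical index as above:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

resp = es.search(index="my-index-000001", scroll="1m", size=100,
                 query={"match": {"message": "foo"}})
scroll_id = resp["_scroll_id"]
try:
    while resp["hits"]["hits"]:
        ...                                  # process the batch here
        resp = es.scroll(scroll_id=scroll_id, scroll="1m")
        scroll_id = resp["_scroll_id"]       # may change between requests
finally:
    es.clear_scroll(scroll_id=scroll_id)     # free the search context promptly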
Sliced scroll
For scroll queries that return a lot of documents it is possible to split the scroll into multiple slices which can be consumed independently:
GET /my-index-000001/_search?scroll=1m
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "query": {
    "match": {
      "message": "foo"
    }
  }
}

GET /my-index-000001/_search?scroll=1m
{
  "slice": {
    "id": 1,
    "max": 2
  },
  "query": {
    "match": {
      "message": "foo"
    }
  }
}
The result from the first request returned documents that belong to the first slice (id: 0) and the result from the second request returned documents that belong to the second slice. Since the maximum number of slices is set to 2, the union of the results of the two requests is equivalent to the results of a scroll query without slicing.
By default the splitting is done on the shards first and then locally on each shard using the _id field with the following formula:

slice(doc) = floorMod(hashCode(doc._id), max)
For instance, if the number of shards is equal to 2 and the user requested 4 slices, then slices 0 and 2 are assigned to the first shard and slices 1 and 3 are assigned to the second shard.
Each scroll is independent and can be processed in parallel like any scroll request.
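For example, the slices can be consumed from separate workers. The following sketch drives each slice from its own thread using the Python elasticsearch client (8.x-style keyword arguments; the slice count, index, and query are placeholders):

from concurrent.futures import ThreadPoolExecutor
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster; the client is thread-safe
MAX_SLICES = 2

def consume_slice(slice_id: int) -> int:
    # Each worker runs an independent scroll over one slice of the result set.
    resp = es.search(
        index="my-index-000001",             # hypothetical index
        scroll="1m",
        slice={"id": slice_id, "max": MAX_SLICES},
        query={"match": {"message": "foo"}},
    )
    count = 0
    while resp["hits"]["hits"]:
        count += len(resp["hits"]["hits"])
        resp = es.scroll(scroll_id=resp["_scroll_id"], scroll="1m")
    return count

with ThreadPoolExecutor(max_workers=MAX_SLICES) as pool:
    totals = list(pool.map(consume_slice, range(MAX_SLICES)))

# The union of all slices equals an unsliced scroll over the same query.
print(sum(totals))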
If the number of slices is bigger than the number of shards, the slice filter is very slow on the first calls: it has a complexity of O(N) and a memory cost equal to N bits per slice, where N is the total number of documents in the shard. After a few calls the filter should be cached and subsequent calls should be faster, but you should limit the number of sliced queries you perform in parallel to avoid a memory explosion.
To avoid this cost entirely it is possible to use the doc_values of another field to do the slicing, but the user must ensure that the field has the following properties:
- The field is numeric.
- doc_values are enabled on that field.
- Every document should contain a single value. If a document has multiple values for the specified field, the first value is used.
- The value for each document should be set once when the document is created and never updated. This ensures that each slice gets deterministic results.
- The cardinality of the field should be high. This ensures that each slice gets approximately the same amount of documents.
GET /my-index-000001/_search?scroll=1m
{
  "slice": {
    "field": "@timestamp",
    "id": 0,
    "max": 10
  },
  "query": {
    "match": {
      "message": "foo"
    }
  }
}
For append-only time-based indices, the timestamp field can be used safely.
By default the maximum number of slices allowed per scroll is limited to 1024. You can update the index.max_slices_per_scroll index setting to bypass this limit.
Search after
Pagination of results can be done by using the from and size parameters, but the cost becomes prohibitive when deep pagination is reached. The index.max_result_window index setting, which defaults to 10,000, is a safeguard: search requests take heap memory and time proportional to from + size. The scroll API is recommended for efficient deep scrolling, but scroll contexts are costly and it is not recommended for real-time user requests.
The search_after parameter circumvents this problem by providing a live cursor. The idea is to use the results from the previous page to help the retrieval of the next page. Suppose that the query to retrieve the first page looks like this:
GET my-index-000001/_search
{
  "size": 10,
  "query": {
    "match" : {
      "message" : "foo"
    }
  },
  "sort": [
    {"@timestamp": "asc"},
    {"tie_breaker_id": "asc"}
  ]
}
A field with one unique value per document should be used as the tiebreaker of the sort specification. Otherwise the sort order for documents that have the same sort values would be undefined and could lead to missing or duplicate results. The _id field has a unique value per document but it is not recommended to use it as a tiebreaker directly.
Beware that search_after looks for the first document which fully or partially matches the tiebreaker’s provided value. Therefore, if a document has a tiebreaker value of "654323" and you search_after for "654", it would still match that document and return results found after it.
Doc values are disabled on the _id field, so sorting on it requires loading a lot of data in memory. Instead it is advised to duplicate (client-side or with a set ingest processor) the content of the _id field in another field that has doc values enabled and to use this new field as the tiebreaker for the sort.
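One way to do the server-side duplication is a set ingest processor that copies the incoming document ID into a doc-values-enabled field at index time. The following is only a sketch: the pipeline name is hypothetical, it assumes the Python elasticsearch client (8.x-style keyword arguments), and the {{_id}} template value is typically only available when the ID is supplied with the index request rather than auto-generated:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Hypothetical pipeline copying the client-provided _id into a sortable field.
es.ingest.put_pipeline(
    id="copy-id-to-tiebreaker",
    processors=[
        {"set": {"field": "tie_breaker_id", "value": "{{_id}}"}}
    ],
)

# Index through the pipeline so every document carries the tiebreaker field.
es.index(
    index="my-index-000001",                 # hypothetical index
    id="654323",                             # client-provided ID
    pipeline="copy-id-to-tiebreaker",
    document={"message": "foo"},
)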
The result from the search request above includes an array of sort values for each document. These sort values can be used in conjunction with the search_after parameter to start returning results "after" any document in the result list. For instance, we can use the sort values of the last document and pass them to search_after to retrieve the next page of results:
GET my-index-000001/_search
{
  "size": 10,
  "query": {
    "match" : {
      "message" : "foo"
    }
  },
  "search_after": [1463538857, "654323"],
  "sort": [
    {"@timestamp": "asc"},
    {"tie_breaker_id": "asc"}
  ]
}
The parameter from must be set to 0 (or -1) when search_after is used.
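Chained together, a client keeps the sort values of the last hit from each page and feeds them back as search_after on the next request until a page comes back empty. A minimal sketch, assuming the Python elasticsearch client (8.x-style keyword arguments) and the same hypothetical index, query, and tie_breaker_id field as above:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

search_after = None
while True:
    kwargs = dict(
        index="my-index-000001",             # hypothetical index
        size=10,
        query={"match": {"message": "foo"}},
        sort=[{"@timestamp": "asc"}, {"tie_breaker_id": "asc"}],
    )
    if search_after is not None:
        kwargs["search_after"] = search_after  # sort values of the previous page's last hit
    resp = es.search(**kwargs)
    hits = resp["hits"]["hits"]
    if not hits:
        break
    for hit in hits:
        print(hit["_id"])
    search_after = hits[-1]["sort"]          # cursor for the next page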
search_after is not a solution to jump freely to a random page but rather to scroll many queries in parallel. It is very similar to the scroll API but, unlike it, the search_after parameter is stateless: it is always resolved against the latest version of the searcher. For this reason the sort order may change during a walk, depending on the updates and deletes of your index.