Getting started with rollups

Deprecated in 8.11.0.
Rollups will be removed in a future version. Please migrate to downsampling instead.
From 8.15.0, invoking the put job API in a cluster with no rollup usage will fail with a message about Rollup’s deprecation and planned removal. A cluster must already contain either a rollup job or a rollup index for the put job API to be allowed to execute.
To use the Rollup feature, you need to create one or more "Rollup Jobs". These jobs run continuously in the background and roll up the index or indices that you specify, placing the rolled documents in a secondary index (also of your choosing).
Imagine you have a series of daily indices that hold sensor data (sensor-2017-01-01, sensor-2017-01-02, etc). A sample document might look like this:

{
  "timestamp": 1516729294000,
  "temperature": 200,
  "voltage": 5.2,
  "node": "a"
}
Creating a rollup job
We’d like to roll up these documents into hourly summaries, which will allow us to generate reports and dashboards with any time interval of one hour or greater. A rollup job might look like this:
resp = client.rollup.put_job(
    id="sensor",
    index_pattern="sensor-*",
    rollup_index="sensor_rollup",
    cron="*/30 * * * * ?",
    page_size=1000,
    groups={
        "date_histogram": {
            "field": "timestamp",
            "fixed_interval": "60m"
        },
        "terms": {
            "fields": ["node"]
        }
    },
    metrics=[
        {
            "field": "temperature",
            "metrics": ["min", "max", "sum"]
        },
        {
            "field": "voltage",
            "metrics": ["avg"]
        }
    ],
)
print(resp)

const response = await client.rollup.putJob({
  id: "sensor",
  index_pattern: "sensor-*",
  rollup_index: "sensor_rollup",
  cron: "*/30 * * * * ?",
  page_size: 1000,
  groups: {
    date_histogram: {
      field: "timestamp",
      fixed_interval: "60m",
    },
    terms: {
      fields: ["node"],
    },
  },
  metrics: [
    {
      field: "temperature",
      metrics: ["min", "max", "sum"],
    },
    {
      field: "voltage",
      metrics: ["avg"],
    },
  ],
});
console.log(response);

PUT _rollup/job/sensor
{
  "index_pattern": "sensor-*",
  "rollup_index": "sensor_rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "timestamp",
      "fixed_interval": "60m"
    },
    "terms": {
      "fields": ["node"]
    }
  },
  "metrics": [
    {
      "field": "temperature",
      "metrics": ["min", "max", "sum"]
    },
    {
      "field": "voltage",
      "metrics": ["avg"]
    }
  ]
}
We give the job the ID of "sensor" (in the url: PUT _rollup/job/sensor), and tell it to roll up the index pattern "sensor-*". This job will find and roll up any index that matches that pattern. Rollup summaries are then stored in the "sensor_rollup" index.

The cron parameter controls when and how often the job activates. When a rollup job’s cron schedule triggers, it will begin rolling up from where it left off after the last activation. So if you configure the cron to run every 30 seconds, the job will process the last 30 seconds’ worth of data that was indexed into the sensor-* indices.

If instead the cron were configured to run once a day at midnight, the job would process the last 24 hours’ worth of data. The choice is largely one of preference, based on how "realtime" you want the rollups to be, and whether you wish to process continuously or shift the work to off-peak hours.
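For illustration, here is a sketch of an off-peak schedule (this expression is hypothetical and not part of the example job). The cron expression runs seconds-first, so the following would trigger the job once a day at 2:00 AM:

"cron": "0 0 2 * * ?"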
Next, we define a set of groups. Essentially, we are defining the dimensions that we wish to pivot on at a later date when querying the data. The grouping in this job allows us to use date_histogram aggregations on the timestamp field, rolled up at hourly intervals. It also allows us to run terms aggregations on the node field.
After defining which groups should be generated for the data, you next configure which metrics should be collected. By default, only the doc_counts are collected for each group. To make rollup useful, you will often add metrics like averages, mins, maxes, etc. In this example, the metrics are fairly straightforward: we want to save the min/max/sum of the temperature field, and the average of the voltage field.
For more details about the job syntax, see Create rollup jobs.
After you execute the above command and create the job, you’ll receive the following response:
{ "acknowledged": true }
Starting the job
After the job is created, it will be sitting in an inactive state. Jobs need to be started before they begin processing data (this allows you to stop them later, as a way to temporarily pause processing without deleting the configuration).
To start the job, execute this command:
resp = client.rollup.start_job(
    id="sensor",
)
print(resp)

response = client.rollup.start_job(
  id: 'sensor'
)
puts response

const response = await client.rollup.startJob({
  id: "sensor",
});
console.log(response);
POST _rollup/job/sensor/_start
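The inverse also exists: if you later need to pause the job, you can stop it with the stop rollup jobs API, which leaves the configuration in place so the job can be started again later:

POST _rollup/job/sensor/_stop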
Searching the rolled results
After the job has run and processed some data, we can use the Rollup search endpoint to do some searching. The Rollup feature is designed so that you can use the same Query DSL syntax that you are accustomed to… it just happens to run on the rolled up data instead.
For example, take this query:
resp = client.rollup.rollup_search(
    index="sensor_rollup",
    size=0,
    aggregations={
        "max_temperature": {
            "max": {
                "field": "temperature"
            }
        }
    },
)
print(resp)

response = client.rollup.rollup_search(
  index: 'sensor_rollup',
  body: {
    size: 0,
    aggregations: {
      max_temperature: {
        max: {
          field: 'temperature'
        }
      }
    }
  }
)
puts response

const response = await client.rollup.rollupSearch({
  index: "sensor_rollup",
  size: 0,
  aggregations: {
    max_temperature: {
      max: {
        field: "temperature",
      },
    },
  },
});
console.log(response);

GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
It’s a simple aggregation that calculates the maximum of the temperature field. But you’ll notice that it is being sent to the sensor_rollup index instead of the raw sensor-* indices. And you’ll also notice that it is using the _rollup_search endpoint. Otherwise the syntax is exactly as you’d expect.
If you were to execute that query, you’d receive a result that looks like a normal aggregation response:
{ "took" : 102, "timed_out" : false, "terminated_early" : false, "_shards" : ... , "hits" : { "total" : { "value": 0, "relation": "eq" }, "max_score" : 0.0, "hits" : [ ] }, "aggregations" : { "max_temperature" : { "value" : 202.0 } } }
The only notable difference is that Rollup search results have zero hits, because we aren’t really searching the original, live data any more. Otherwise it’s identical syntax.
There are a few interesting takeaways here. First, even though the data was rolled up with hourly intervals and partitioned by node name, the query we ran is just calculating the max temperature across all documents. The groups that were configured in the job are not mandatory elements of a query; they are just extra dimensions you can partition on. Second, the request and response syntax is nearly identical to normal DSL, making it easy to integrate into dashboards and applications.
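One caveat: you can only aggregate on fields and metrics that the job actually stored. For example, the job above never stored an average of the temperature field, so a rollup search requesting one (a hypothetical query, shown to illustrate the failure mode) would return an error rather than silently computing a wrong answer:

GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "avg_temperature": {
      "avg": {
        "field": "temperature"
      }
    }
  }
}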
Finally, we can use those grouping fields we defined to construct a more complicated query:
resp = client.rollup.rollup_search(
    index="sensor_rollup",
    size=0,
    aggregations={
        "timeline": {
            "date_histogram": {
                "field": "timestamp",
                "fixed_interval": "7d"
            },
            "aggs": {
                "nodes": {
                    "terms": {
                        "field": "node"
                    },
                    "aggs": {
                        "max_temperature": {
                            "max": {
                                "field": "temperature"
                            }
                        },
                        "avg_voltage": {
                            "avg": {
                                "field": "voltage"
                            }
                        }
                    }
                }
            }
        }
    },
)
print(resp)

response = client.rollup.rollup_search(
  index: 'sensor_rollup',
  body: {
    size: 0,
    aggregations: {
      timeline: {
        date_histogram: {
          field: 'timestamp',
          fixed_interval: '7d'
        },
        aggregations: {
          nodes: {
            terms: {
              field: 'node'
            },
            aggregations: {
              max_temperature: {
                max: {
                  field: 'temperature'
                }
              },
              avg_voltage: {
                avg: {
                  field: 'voltage'
                }
              }
            }
          }
        }
      }
    }
  }
)
puts response

const response = await client.rollup.rollupSearch({
  index: "sensor_rollup",
  size: 0,
  aggregations: {
    timeline: {
      date_histogram: {
        field: "timestamp",
        fixed_interval: "7d",
      },
      aggs: {
        nodes: {
          terms: {
            field: "node",
          },
          aggs: {
            max_temperature: {
              max: {
                field: "temperature",
              },
            },
            avg_voltage: {
              avg: {
                field: "voltage",
              },
            },
          },
        },
      },
    },
  },
});
console.log(response);

GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "timeline": {
      "date_histogram": {
        "field": "timestamp",
        "fixed_interval": "7d"
      },
      "aggs": {
        "nodes": {
          "terms": {
            "field": "node"
          },
          "aggs": {
            "max_temperature": {
              "max": {
                "field": "temperature"
              }
            },
            "avg_voltage": {
              "avg": {
                "field": "voltage"
              }
            }
          }
        }
      }
    }
  }
}
Which returns a corresponding response:
{ "took" : 93, "timed_out" : false, "terminated_early" : false, "_shards" : ... , "hits" : { "total" : { "value": 0, "relation": "eq" }, "max_score" : 0.0, "hits" : [ ] }, "aggregations" : { "timeline" : { "buckets" : [ { "key_as_string" : "2018-01-18T00:00:00.000Z", "key" : 1516233600000, "doc_count" : 6, "nodes" : { "doc_count_error_upper_bound" : 0, "sum_other_doc_count" : 0, "buckets" : [ { "key" : "a", "doc_count" : 2, "max_temperature" : { "value" : 202.0 }, "avg_voltage" : { "value" : 5.1499998569488525 } }, { "key" : "b", "doc_count" : 2, "max_temperature" : { "value" : 201.0 }, "avg_voltage" : { "value" : 5.700000047683716 } }, { "key" : "c", "doc_count" : 2, "max_temperature" : { "value" : 202.0 }, "avg_voltage" : { "value" : 4.099999904632568 } } ] } } ] } } }
In addition to being more complicated (a date histogram and a terms aggregation, plus an additional average metric), you’ll notice the date_histogram uses a 7d interval instead of 60m. Because the data was rolled up at hourly granularity, queries are free to re-bucket it into any interval of one hour or greater.
Conclusion
This quickstart should have provided a concise overview of the core functionality that Rollup exposes. There are more tips and things to consider when setting up Rollups, which you can find throughout the rest of this section. You may also explore the REST API for an overview of what is available.