Create a rollup job (Technical preview)
WARNING: From 8.15.0, calling this API in a cluster with no rollup usage will fail with a message about the deprecation and planned removal of rollup features. A cluster needs to contain either a rollup job or a rollup index in order for this API to be allowed to run.
The rollup job configuration contains all the details about how the job should run, when it indexes documents, and what future queries will be able to run against the rollup index.
There are three main sections to the job configuration: the logistical details about the job (for example, the cron schedule), the fields that are used for grouping, and what metrics to collect for each group.
Jobs are created in a STOPPED state. You can start them with the start rollup jobs API.
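For example, a job created with the illustrative identifier sensor could then be started with the start rollup jobs API:
curl \
 -X POST http://api.example.com/_rollup/job/sensor/_start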
Path parameters
- id string Required
Identifier for the rollup job. This can be any alphanumeric string and uniquely identifies the data that is associated with the rollup job. The ID is persistent; it is stored with the rolled up data. If you create a job, let it run for a while, then delete the job, the data that the job rolled up is still associated with this job ID. You cannot create a new job with the same ID, since that could lead to problems with mismatched job configurations.
Body Required
- cron string Required
A cron string which defines the intervals when the rollup job should be executed. When the interval triggers, the indexer attempts to roll up the data in the index pattern. The cron pattern is unrelated to the time interval of the data being rolled up. For example, you may wish to create hourly rollups of your documents but only run the indexer once a day at midnight, as defined by the cron (see the example request below). The cron pattern is defined just like a Watcher cron schedule.
- groups object Required
Defines the fields that are used for grouping the documents (date_histogram, histogram, and terms groupings). Additional properties are allowed.
- index_pattern string Required
The index or index pattern to roll up. Supports wildcard-style patterns (for example, logstash-*). The job attempts to roll up the entire index or index pattern.
- metrics array[object]
Defines the metrics to collect for each grouping tuple. By default, only the doc_counts are collected for each group. To make rollup useful, you will often add metrics like averages, mins, and maxes. Metrics are defined on a per-field basis; for each field you configure which metrics should be collected, as shown in the example request below.
- page_size number Required
The number of bucket results that are processed on each iteration of the rollup indexer. A larger value tends to execute faster, but requires more memory during processing. This value has no effect on how the data is rolled up; it is merely used for tweaking the speed or memory cost of the indexer.
- rollup_index string Required
The index that contains the rollup results.
- timeout string
A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- headers object
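For example, the following request creates a job for a hypothetical set of sensor-* indices: it groups documents into hourly buckets on the timestamp field and by the node field, collects min/max/sum of temperature and the average voltage, and writes the results to sensor_rollup. The job identifier, index pattern, and field names are illustrative; the cron runs the indexer once a day at midnight.
curl \
 -X PUT http://api.example.com/_rollup/job/sensor \
 -H "Content-Type: application/json" \
 -d '{"cron":"0 0 0 * * ?","groups":{"date_histogram":{"field":"timestamp","fixed_interval":"1h","delay":"7d"},"terms":{"fields":["node"]}},"index_pattern":"sensor-*","metrics":[{"field":"temperature","metrics":["min","max","sum"]},{"field":"voltage","metrics":["avg"]}],"page_size":1000,"rollup_index":"sensor_rollup","timeout":"20s"}'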
curl \
-X PUT http://api.example.com/_rollup/job/{id} \
-H "Content-Type: application/json" \
-d '{"cron":"string","groups":{"date_histogram":{"delay":"string","field":"string","format":"string","interval":"string","calendar_interval":"string","fixed_interval":"string","time_zone":"string"},"histogram":{"fields":"string","interval":42.0},"terms":{"fields":"string"}},"index_pattern":"string","metrics":[{"field":"string","metrics":["min"]}],"page_size":42.0,"rollup_index":"string","timeout":"string","headers":{}}'
{
"cron": "string",
"groups": {
"date_histogram": {
"delay": "string",
"field": "string",
"format": "string",
"interval": "string",
"calendar_interval": "string",
"fixed_interval": "string",
"time_zone": "string"
},
"histogram": {
"fields": "string",
"interval": 42.0
},
"terms": {
"fields": "string"
}
},
"index_pattern": "string",
"metrics": [
{
"field": "string",
"metrics": [
"min"
]
}
],
"page_size": 42.0,
"rollup_index": "string",
"timeout": "string",
"headers": {}
}
{
"acknowledged": true
}