Downsample index API
Aggregates a time series data stream (TSDS) index and stores pre-computed statistical summaries (`min`, `max`, `sum`, `value_count` and `avg`) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index.
```ruby
response = client.indices.downsample(
  index: 'my-time-series-index',
  target_index: 'my-downsampled-time-series-index',
  body: {
    fixed_interval: '1d'
  }
)
puts response
```

```console
POST /my-time-series-index/_downsample/my-downsampled-time-series-index
{
  "fixed_interval": "1d"
}
```
Request
POST /<source-index>/_downsample/<output-downsampled-index>
Prerequisites
- Only indices in a time series data stream are supported.
- If the Elasticsearch security features are enabled, you must have the `all` or `manage` index privilege for the data stream.
- Neither field nor document level security can be defined on the source index.
- The source index must be read only (`index.blocks.write: true`).
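The write block can be added with the add index block API before downsampling; for example, with the Ruby client (index name assumed, and an existing `client` instance as in the example above):

```ruby
# Mark the source index read-only by adding a write block.
client.indices.add_block(index: 'my-time-series-index', block: 'write')
```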
Path parameters
- `<source-index>`
  (Optional, string) Name of the time series index to downsample.
- `<output-downsampled-index>`
  (Required, string) Name of the index to create.

  Index names must meet the following criteria:

  - Lowercase only
  - Cannot include `\`, `/`, `*`, `?`, `"`, `<`, `>`, `|`, ` ` (space character), `,`, `#`
  - Indices prior to 7.0 could contain a colon (`:`), but that’s been deprecated and won’t be supported in 7.0+
  - Cannot start with `-`, `_`, `+`
  - Cannot be `.` or `..`
  - Cannot be longer than 255 bytes (note it is bytes, so multi-byte characters will count towards the 255 limit faster)
  - Names starting with `.` are deprecated, except for hidden indices and internal indices managed by plugins
Query parameters
- `fixed_interval`
  (Required, time units) The interval at which to aggregate the original time series index. For example, `60m` produces a document for each 60 minute (hourly) interval. This follows standard time formatting syntax as used elsewhere in Elasticsearch.

  Smaller, more granular intervals take up proportionally more space.
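The bucketing implied by `fixed_interval` can be illustrated in plain Ruby (a sketch, not the server implementation): each document's `@timestamp` is rounded down to the start of its interval.

```ruby
require 'time'

# Illustrative only: round a timestamp down to the start of its
# fixed_interval bucket, the way downsampling groups documents.
def bucket_start(timestamp, interval_seconds)
  epoch = timestamp.to_i
  Time.at(epoch - (epoch % interval_seconds)).utc
end

ts = Time.utc(2023, 5, 4, 10, 47, 12)
bucket_start(ts, 60 * 60)       # => 2023-05-04 10:00:00 UTC ("60m" bucket)
bucket_start(ts, 24 * 60 * 60)  # => 2023-05-04 00:00:00 UTC ("1d" bucket)
```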
The downsampling process
The downsampling operation traverses the source TSDS index and performs the following steps:

- Creates a new document for each value of the `_tsid` field and each `@timestamp` value, rounded to the `fixed_interval` defined in the downsample configuration.
- For each new document, copies all time series dimensions from the source index to the target index. Dimensions in a TSDS are constant, so this is done only once per bucket.
- For each time series metric field, computes aggregations for all documents in the bucket. Depending on the metric type of each metric field, a different set of pre-aggregated results is stored:
  - `gauge`: The `min`, `max`, `sum`, and `value_count` are stored; `value_count` is stored as type `aggregate_metric_double`.
  - `counter`: The `last_value` is stored.
- For all other fields, the most recent value is copied to the target index.
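The steps above can be sketched in plain Ruby for a single `gauge` metric (an illustration under assumed document shapes, not the server implementation):

```ruby
# Sketch: group source documents by (_tsid, rounded @timestamp) and store
# pre-aggregated gauge results per bucket. Field names are illustrative.
def downsample(docs, interval_seconds)
  docs.group_by { |d| [d[:tsid], d[:timestamp] - (d[:timestamp] % interval_seconds)] }
      .map do |(tsid, bucket), group|
        values = group.map { |d| d[:gauge] }
        { tsid: tsid, timestamp: bucket,
          gauge: { min: values.min, max: values.max,
                   sum: values.sum, value_count: values.size } }
      end
end

docs = [
  { tsid: 'host-a', timestamp: 10,   gauge: 2.0 },
  { tsid: 'host-a', timestamp: 50,   gauge: 4.0 },
  { tsid: 'host-a', timestamp: 3700, gauge: 1.0 }
]
downsample(docs, 3600)
# => one document per hourly bucket; the first holds min 2.0, max 4.0,
#    sum 6.0, value_count 2
```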
Source and target index field mappings
Fields in the target, downsampled index are created based on fields in the original source index, as follows:

- All fields mapped with the `time_series_dimension` parameter are created in the target downsample index with the same mapping as in the source index.
- All fields mapped with the `time_series_metric` parameter are created in the target downsample index with the same mapping as in the source index. An exception is that for fields mapped as `time_series_metric: gauge`, the field type is changed to `aggregate_metric_double`.
- All other fields that are neither dimensions nor metrics (that is, label fields) are created in the target downsample index with the same mapping that they had in the source index.
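These rules amount to a simple transformation over the source mapping; a hedged Ruby sketch (the `downsample_mapping` helper is illustrative, not an API, and the `default_metric` choice is an assumption):

```ruby
# Sketch of the field-mapping rules: gauge metrics change type to
# aggregate_metric_double; dimensions, counters and labels keep their
# source mapping. The default_metric value here is an assumption.
def downsample_mapping(source_props)
  source_props.transform_values do |m|
    next m unless m[:time_series_metric] == 'gauge'
    { type: 'aggregate_metric_double',
      metrics: %w[min max sum value_count],
      default_metric: 'max',
      time_series_metric: 'gauge' }
  end
end

source = {
  host:        { type: 'keyword', time_series_dimension: true },
  temperature: { type: 'double', time_series_metric: 'gauge' },
  requests:    { type: 'long', time_series_metric: 'counter' }
}
downsample_mapping(source)
# => host and requests unchanged; temperature becomes aggregate_metric_double
```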
Check the Downsampling documentation for an overview and examples of running downsampling manually and as part of an ILM policy.