Get anomaly records for an anomaly detection job
Added in 5.4.0
Records contain the detailed analytical results. They describe the anomalous activity that has been identified in the input data based on the detector configuration. Depending on the characteristics and size of the input data, there can be many anomaly records; in practice, there are often too many to process manually. The machine learning features therefore perform a sophisticated aggregation of the anomaly records into buckets. The number of record results depends on the number of anomalies found in each bucket, which relates to the number of time series being modeled and the number of detectors.
Path parameters
-
job_id
string Required Identifier for the anomaly detection job.
Query parameters
-
desc
boolean If true, the results are sorted in descending order.
-
end
string | number Returns records with timestamps earlier than this time. The default value means results are not limited to specific timestamps.
-
exclude_interim
boolean If true, the output excludes interim results.
-
from
number Skips the specified number of records.
-
record_score
number Returns records with anomaly scores greater than or equal to this value.
-
size
number Specifies the maximum number of records to obtain.
-
sort
string Specifies the sort field for the requested records.
-
start
string | number Returns records with timestamps after this time. The default value means results are not limited to specific timestamps.
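The query parameters above can be combined into a single request URL. A minimal sketch, assuming a hypothetical local cluster at `localhost:9200` and a job named `my-job` (both illustrative, not part of this reference):

```python
from urllib.parse import urlencode

# Hypothetical endpoint and job identifier; adjust for your deployment.
BASE = "http://localhost:9200"
job_id = "my-job"

# Query parameters documented above: sort by record_score, descending,
# skip interim results, keep only records scoring >= 50, and page the output.
params = {
    "sort": "record_score",
    "desc": "true",
    "exclude_interim": "true",
    "record_score": 50,
    "from": 0,
    "size": 25,
}

url = f"{BASE}/_ml/anomaly_detectors/{job_id}/results/records?{urlencode(params)}"
print(url)
```

The same options can instead be sent in the request body, as shown in the Body section and the curl example below.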
Body
-
desc
boolean Refer to the description for the desc query parameter.
-
end
string | number A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.
-
exclude_interim
boolean Refer to the description for the exclude_interim query parameter.
-
page
object Configures pagination; its from and size properties behave like the from and size query parameters.
-
record_score
number Refer to the description for the record_score query parameter.
-
sort
string Path to a field, or an array of paths. Some APIs support wildcards in the path to select multiple fields.
-
start
string | number A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.
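When a job has produced more records than fit in one response, the from and size properties of the page object can be advanced to walk the result set in fixed-size chunks. A minimal sketch, assuming a hypothetical fetch_page callable that POSTs the body to this endpoint and returns the parsed JSON response (the callable is not part of this API):

```python
def iter_records(fetch_page, page_size=500):
    """Yield every record, advancing page.from until a page comes back short.

    fetch_page(body) is a hypothetical callable that POSTs `body` to
    _ml/anomaly_detectors/<job_id>/results/records and returns the parsed
    JSON response (a dict with "count" and "records" keys, as shown below).
    """
    offset = 0
    while True:
        body = {
            "page": {"from": offset, "size": page_size},
            "sort": "record_score",
            "desc": True,
        }
        response = fetch_page(body)
        records = response.get("records", [])
        yield from records
        # A page shorter than page_size means the result set is exhausted.
        if len(records) < page_size:
            break
        offset += page_size
```

This is only a pagination sketch; a real client would also pass start, end, and record_score filters in the same body.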
curl \
--request POST 'http://api.example.com/_ml/anomaly_detectors/{job_id}/results/records' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"desc":true,"end":"string","exclude_interim":true,"page":{"from":42.0,"size":42.0},"record_score":42.0,"sort":"string"}'
{
"desc": true,
"end": "string",
"exclude_interim": true,
"page": {
"from": 42.0,
"size": 42.0
},
"record_score": 42.0,
"sort": "string"
}
{
"count": 42.0,
"records": [
{
"actual": [
42.0
],
"anomaly_score_explanation": {
"anomaly_characteristics_impact": 42.0,
"anomaly_length": 42.0,
"anomaly_type": "string",
"high_variance_penalty": true,
"incomplete_bucket_penalty": true,
"lower_confidence_bound": 42.0,
"multi_bucket_impact": 42.0,
"single_bucket_impact": 42.0,
"typical_value": 42.0,
"upper_confidence_bound": 42.0
},
"bucket_span": 42.0,
"by_field_name": "string",
"by_field_value": "string",
"causes": [
{
"actual": [
42.0
],
"by_field_name": "string",
"by_field_value": "string",
"correlated_by_field_value": "string",
"field_name": "string",
"function": "string",
"function_description": "string",
"geo_results": {
"actual_point": "string",
"typical_point": "string"
},
"influencers": [
{}
],
"over_field_name": "string",
"over_field_value": "string",
"partition_field_name": "string",
"partition_field_value": "string",
"probability": 42.0,
"typical": [
42.0
]
}
],
"detector_index": 42.0,
"field_name": "string",
"function": "string",
"function_description": "string",
"geo_results": {
"actual_point": "string",
"typical_point": "string"
},
"influencers": [
{
"influencer_field_name": "string",
"influencer_field_values": [
"string"
]
}
],
"initial_record_score": 42.0,
"is_interim": true,
"job_id": "string",
"over_field_name": "string",
"over_field_value": "string",
"partition_field_name": "string",
"partition_field_value": "string",
"probability": 42.0,
"record_score": 42.0,
"result_type": "string",
"typical": [
42.0
]
}
]
}
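A response like the one above is often post-processed client-side. A hedged sketch that keeps only finalized, high-scoring records and lists their influencer field names; the field names come from the response schema shown above, while the values are purely illustrative:

```python
# Toy response shaped like the schema above (values are illustrative).
response = {
    "count": 3,
    "records": [
        {"record_score": 91.4, "probability": 1.2e-8, "function": "mean",
         "is_interim": False,
         "influencers": [{"influencer_field_name": "airline",
                          "influencer_field_values": ["AAL"]}]},
        {"record_score": 12.0, "probability": 0.002, "function": "mean",
         "is_interim": True, "influencers": []},
        {"record_score": 55.5, "probability": 3.4e-4, "function": "count",
         "is_interim": False, "influencers": []},
    ],
}

# Keep finalized records scoring at least 50, highest score first.
top = sorted(
    (r for r in response["records"]
     if not r["is_interim"] and r["record_score"] >= 50),
    key=lambda r: r["record_score"],
    reverse=True,
)
for r in top:
    names = [i["influencer_field_name"] for i in r.get("influencers", [])]
    print(f"{r['record_score']:.1f} {r['function']} influencers={names}")
```

Note that record_score can change as new data arrives (initial_record_score preserves the score first assigned), so filtering on is_interim avoids acting on results that may still be revised.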