WARNING: Version 5.4 of the Elastic Stack has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Create Jobs
The create job API enables you to instantiate a job.
Request
PUT _xpack/ml/anomaly_detectors/<job_id>
Path Parameters
- job_id (required) - (string) Identifier for the job.
Request Body
- analysis_config - (object) The analysis configuration, which specifies how to analyze the data. See analysis configuration objects.
- analysis_limits - (object) Optionally specifies runtime limits for the job. See analysis limits.
- background_persist_interval - (time units) Advanced configuration option. The time between each periodic persistence of the model. See Job Resources.
- custom_settings - (object) Advanced configuration option. Contains custom meta data about the job. See Job Resources.
- data_description (required) - (object) Describes the format of the input data. This object is required, but it can be empty ({}). See data description objects.
- description - (string) A description of the job.
- model_plot_config - (object) Advanced configuration option. Specifies to store model information along with the results. This adds overhead to the performance of the system and is not feasible for jobs with many entities. See Model Plot Config.
- model_snapshot_retention_days - (long) The time in days that model snapshots are retained for the job. Older snapshots are deleted. The default value is 1 day. For more information about model snapshots, see Model Snapshot Resources.
- renormalization_window_days - (long) Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. See Job Resources.
- results_index_name - (string) The name of the index in which to store the machine learning results. The default value is shared, which corresponds to the index name .ml-anomalies-shared.
- results_retention_days - (long) Advanced configuration option. The number of days for which job results are retained. See Job Resources.
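Several of the optional fields above can be combined in a single create request. The following is an illustrative sketch, not a recommended configuration: the job name, detector, and field values are hypothetical, and it assumes you want results kept in a dedicated index with longer snapshot and result retention than the defaults.

```console
PUT _xpack/ml/anomaly_detectors/it-ops-custom
{
  "description": "Job with a custom results index and longer retention",
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      { "function": "mean", "field_name": "responsetime" }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  },
  "results_index_name": "custom-it-ops",
  "model_snapshot_retention_days": 10,
  "results_retention_days": 60
}
```

When you supply a custom results_index_name, results are typically written to a dedicated index whose name is derived from the value you provide, rather than to the shared .ml-anomalies-shared index.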
Authorization
You must have manage_ml, or manage cluster privileges to use this API. For more information, see Cluster Privileges.
Examples
The following example creates the it-ops-kpi job:

PUT _xpack/ml/anomaly_detectors/it-ops-kpi
{
  "description": "First simple job",
  "analysis_config": {
    "bucket_span": "5m",
    "latency": "0ms",
    "detectors": [
      {
        "detector_description": "low_sum(events_per_min)",
        "function": "low_sum",
        "field_name": "events_per_min"
      }
    ]
  },
  "data_description": {
    "time_field": "@timestamp",
    "time_format": "epoch_ms"
  }
}
When the job is created, you receive the following results:
{
  "job_id": "it-ops-kpi",
  "job_type": "anomaly_detector",
  "description": "First simple job",
  "create_time": 1491948238874,
  "analysis_config": {
    "bucket_span": "5m",
    "latency": "0ms",
    "detectors": [
      {
        "detector_description": "low_sum(events_per_min)",
        "function": "low_sum",
        "field_name": "events_per_min",
        "detector_rules": []
      }
    ],
    "influencers": []
  },
  "data_description": {
    "time_field": "@timestamp",
    "time_format": "epoch_ms"
  },
  "model_snapshot_retention_days": 1,
  "results_index_name": "shared"
}
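A newly created job does not analyze anything until it is opened. Continuing the example above, you would typically open the job and then send it data, either directly or through a datafeed (see the Open Jobs and Post Data to Jobs APIs):

```console
POST _xpack/ml/anomaly_detectors/it-ops-kpi/_open
```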