X-Pack Reference for 6.0-6.2 and 5.x
WARNING: Version 6.2 of the Elastic Stack has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Function Reference

The X-Pack machine learning features include analysis functions that provide a wide variety of flexible ways to analyze data for anomalies.
When you create jobs, you specify one or more detectors, which define the type of analysis that needs to be done. If you are creating your job by using machine learning APIs, you specify the functions in Detector Configuration Objects. If you are creating your job in Kibana, you specify the functions differently depending on whether you are creating single metric, multi-metric, or advanced jobs. For a demonstration of creating jobs in Kibana, see Getting Started.
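As a sketch of what a detector looks like in practice, the request below creates a job through the machine learning API with a single detector configuration object. The job name, bucket span, and field names (`status_code`, `@timestamp`) are hypothetical and would need to match your own data:

```console
PUT _xpack/ml/anomaly_detectors/example_job
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "detector_description": "Unusually high event count per status code",
        "function": "high_count",
        "by_field_name": "status_code"
      }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```

The `function` field selects one of the analysis functions described in this reference; the other detector fields control how the analysis is split across your data.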
Most functions detect anomalies in both low and high values. In statistical terminology, they apply a two-sided test. Some functions offer low and high variations (for example, count, low_count, and high_count). These variations apply one-sided tests, detecting anomalies only when the values are low or only when they are high, depending on which variation is used.
You can specify a summary_count_field_name with any function except metric. When you use summary_count_field_name, the machine learning features expect the input data to be pre-aggregated. The value of the summary_count_field_name field must contain the count of raw events that were summarized. In Kibana, use summary_count_field_name in advanced jobs. Analyzing aggregated input data provides a significant boost in performance. For more information, see Aggregating Data For Faster Performance.
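For illustration, a job that analyzes pre-aggregated input might look like the sketch below, where summary_count_field_name points at the field that holds the raw event count for each summarized record. The job name and the `doc_count`, `responsetime`, and `timestamp` fields are assumptions for the example:

```console
PUT _xpack/ml/anomaly_detectors/preaggregated_example
{
  "analysis_config": {
    "bucket_span": "10m",
    "summary_count_field_name": "doc_count",
    "detectors": [
      {
        "function": "mean",
        "field_name": "responsetime"
      }
    ]
  },
  "data_description": {
    "time_field": "timestamp"
  }
}
```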
If your data is sparse, there may be gaps in the data that result in empty buckets. You might want to treat these as anomalies, or you might want them to be ignored. Your decision depends on your use case and what is important to you. It also depends on which functions you use: the sum and count functions are strongly affected by empty buckets. For this reason, there are non_null_sum and non_zero_count functions, which are tolerant of sparse data. These functions effectively ignore empty buckets.
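For example, a detector that counts events per entity while ignoring empty buckets could swap count for non_zero_count, as in this hypothetical detector configuration object (the `host` field is an assumption):

```console
{
  "detector_description": "Event count per host, ignoring empty buckets",
  "function": "non_zero_count",
  "partition_field_name": "host"
}
```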