Analyze log spikes and drops

Elastic Observability Serverless provides built-in log rate analysis capabilities, based on advanced statistical methods, to help you find and investigate the causes of unusual spikes or drops in log rates.
To analyze log spikes and drops:
- In your Elastic Observability Serverless project, go to Machine learning → Log rate analysis.
- Choose a data view or saved search to access the log data you want to analyze.
- In the histogram chart, click a spike (or drop) and then run the analysis.
When the analysis runs, it identifies statistically significant field-value combinations that contribute to the spike or drop and displays them in a table. You can optionally turn on Smart grouping to summarize the results into groups, and click Filter fields to remove fields that are not relevant.
The table shows an indicator of the level of impact and a sparkline showing the shape of the impact in the chart.
- Select a row to display the impact of the field on the histogram chart.
- From the Actions menu in the table, you can choose to view the field in Discover, view it in Log Pattern Analysis, or copy the table row information to the clipboard as a query filter.
To pin a table row, click the row, then move the cursor to the histogram chart. The chart displays a tooltip with exact count values for the pinned field, which enables closer investigation.
Brushes in the chart show the baseline time range and the deviation in the analyzed data. You can move the brushes to redefine both the baseline and the deviation and rerun the analysis with the modified values.
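The UI performs this statistical comparison for you, but if you want to explore the same idea directly against your data, Elasticsearch's significant_terms aggregation offers a comparable foreground-versus-background comparison. The sketch below illustrates the concept only; it is not the analysis the UI runs, and the index pattern, field name, endpoint, and time ranges are placeholders you would replace with your own.

```python
from elasticsearch import Elasticsearch

# Placeholder endpoint -- point this at your own deployment.
es = Elasticsearch("http://localhost:9200")

# Foreground: the deviation window (the spike). Background: the baseline
# window. Index pattern, field, and timestamps are hypothetical examples.
resp = es.search(
    index="logs-*",
    size=0,
    query={
        "range": {"@timestamp": {"gte": "2024-05-01T12:00", "lt": "2024-05-01T12:30"}}
    },
    aggs={
        "suspect_field_values": {
            "significant_terms": {
                "field": "host.name",
                "background_filter": {
                    "range": {"@timestamp": {"gte": "2024-05-01T08:00", "lt": "2024-05-01T12:00"}}
                },
            }
        }
    },
)

# Buckets are scored by how overrepresented a value is in the spike
# compared with the baseline window.
for bucket in resp["aggregations"]["suspect_field_values"]["buckets"]:
    print(bucket["key"], bucket["score"], bucket["doc_count"])
```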
Log pattern analysis

Use log pattern analysis to find patterns in unstructured log messages and examine your data. When you run a log pattern analysis, it performs categorization analysis on a selected field, creates categories based on the data, and then displays them together in a chart. The chart shows the distribution of each category and an example document that matches the category. Log pattern analysis is useful when you want to examine how often different types of logs appear in your data set. It also helps you group logs in ways that go beyond what you can achieve with a terms aggregation.
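This kind of categorization is conceptually similar to Elasticsearch's categorize_text aggregation, which groups unstructured messages into categories server-side. If you want to experiment with categorization outside the UI, a minimal sketch might look like the following; the index pattern and field name are assumptions, and this is not necessarily the exact aggregation the UI issues.

```python
from elasticsearch import Elasticsearch

# Placeholder endpoint -- point this at your own deployment.
es = Elasticsearch("http://localhost:9200")

# Categorize unstructured text in the "message" field and keep one
# example document per category. Index pattern and field are hypothetical.
resp = es.search(
    index="logs-*",
    size=0,
    aggs={
        "categories": {
            "categorize_text": {"field": "message"},
            "aggs": {
                "example": {"top_hits": {"size": 1, "_source": ["message"]}}
            },
        }
    },
)

# Each bucket is one discovered category: a token pattern plus the number
# of documents that matched it.
for bucket in resp["aggregations"]["categories"]["buckets"]:
    example = bucket["example"]["hits"]["hits"][0]["_source"]["message"]
    print(bucket["doc_count"], bucket["key"])
    print("  example:", example)
```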
To run log pattern analysis:
- Follow the steps under Analyze log spikes and drops to run a log rate analysis.
- From the Actions menu, choose View in Log Pattern Analysis.
- Select a category field and optionally apply any filters that you want.
- Click Run pattern analysis.
The results of the analysis are shown in a table.
- From the Actions menu, click the plus (or minus) icon to open Discover and show (or filter out) the given category there, which helps you examine your log messages further.