Log monitoring

[preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Elastic Observability allows you to deploy and manage logs at a petabyte scale, giving you insights into your logs in minutes. You can also search across your logs in one place, troubleshoot in real time, and detect patterns and outliers with categorization and anomaly detection. For more information, refer to the following sections:

Send logs data to your project

You can send logs data to your project in different ways depending on your needs:

  • Elastic Agent
  • Filebeat

When choosing between Elastic Agent and Filebeat, consider the differences in features and functionality between the two. See Beats and Elastic Agent capabilities for guidance on which option best fits your situation.

Elastic Agent

Elastic Agent uses integrations to ingest logs from Kubernetes, MySQL, and many more data sources. You have the following options when installing and managing an Elastic Agent:

Fleet-managed Elastic Agent

Install an Elastic Agent and use Fleet to define, configure, and manage your agents in a central location.

See install Fleet-managed Elastic Agent.

Standalone Elastic Agent

Install an Elastic Agent and manually configure it locally on the system where it’s installed. You are responsible for managing and upgrading the agents.

See install standalone Elastic Agent.

Elastic Agent in a containerized environment

Run an Elastic Agent inside a container, either with Fleet Server or standalone.

See install Elastic Agent in containers.

Filebeat

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as a service on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to your Observability project for indexing.
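As a rough illustration of how Filebeat is pointed at log files and at your project, here is a minimal `filebeat.yml` sketch. The input ID, log paths, and endpoint are placeholders, not values from this document; consult the Filebeat configuration reference for the options your version supports.

```yaml
# Minimal filebeat.yml sketch -- paths and endpoint are placeholders.
filebeat.inputs:
  - type: filestream
    id: my-app-logs              # hypothetical input ID
    paths:
      - /var/log/my-app/*.log    # the log locations Filebeat should monitor

output.elasticsearch:
  hosts: ["https://my-project.es.example.com:443"]  # your project endpoint
  api_key: "${ES_API_KEY}"       # read from the environment or the keystore
```

Once started with this configuration, Filebeat tails the matched files and forwards each log event to the configured output for indexing.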

Configure logs

The following resources provide information on configuring your logs:

  • Data streams: Efficiently store append-only time series data in multiple backing indices partitioned by time and size.
  • Data views: Query log entries from the data streams of specific datasets or namespaces.
  • Index lifecycle management: Configure the built-in logs policy based on your application’s performance, resilience, and retention requirements.
  • Ingest pipeline: Parse and transform log entries into a suitable format before indexing.
  • Mapping: Define how data is stored and indexed.

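To make these pieces concrete, the sketch below composes a hypothetical ingest pipeline body and the name of the data stream it would feed. The dataset, namespace, pipeline ID, and dissect pattern are all illustrative assumptions, not part of this document; applying the pipeline requires a live project (for example via the `elasticsearch` Python client's `ingest.put_pipeline`).

```python
# Sketch: compose an ingest pipeline body and a matching data stream name.
# All names (dataset, namespace, pipeline ID) are illustrative placeholders.

def data_stream_name(dataset: str, namespace: str = "default") -> str:
    """Logs data streams follow the <type>-<dataset>-<namespace> convention."""
    return f"logs-{dataset}-{namespace}"

# An ingest pipeline that parses a simple "LEVEL message" log line
# into structured fields before indexing.
pipeline_body = {
    "description": "Parse 'LEVEL message' log lines (illustrative)",
    "processors": [
        {
            "dissect": {
                "field": "message",
                "pattern": "%{log.level} %{event.summary}",
            }
        },
        {"lowercase": {"field": "log.level"}},
    ],
}

target = data_stream_name("myapp", "production")
print(target)  # logs-myapp-production

# Against a live project, you would apply it roughly like this (not run here):
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch("https://<project-endpoint>", api_key="...")
#   es.ingest.put_pipeline(
#       id="logs-myapp@custom",
#       description=pipeline_body["description"],
#       processors=pipeline_body["processors"],
#   )
```

Index lifecycle management and mappings are then attached to the data stream's backing indices through its index template, so the pipeline, stream name, and template together determine how each log entry is parsed, stored, and retained.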
View and monitor logs

Use Logs Explorer to search, filter, and tail all your logs ingested into your project in one place.

The following resources provide information on viewing and monitoring your logs:

  • Discover and explore: Discover and explore all of the log events flowing in from your servers, virtual machines, and containers in a centralized view.
  • Detect log anomalies: Use machine learning to detect log anomalies automatically.

Monitor data sets

The Data Set Quality page provides an overview of your data sets and their quality. Use this information to assess your overall data set quality and to find data sets that contain incorrectly parsed documents.

Application logs

Application logs provide valuable insight into events that have occurred within your services and applications. See Application logs.