Logs resource guide
In this guide, you’ll find resources on sending log data to Elasticsearch, configuring your logs, and analyzing your logs.
Get started with logs
If you’re new to ingesting, viewing, and analyzing logs with Elastic, see Get started with logs and metrics for an overview of adding integrations, installing and running an Elastic Agent, and monitoring logs.
Send logs data to Elasticsearch
You can send logs data to Elasticsearch in different ways depending on your needs:
- Elastic Agent
- Filebeat
When choosing between Elastic Agent and Beats, consider the differences in features and functionality between the two options. See Elastic Agent and Beats capabilities for more information on which option best fits your situation.
Elastic Agent
Elastic Agent uses integrations to ingest logs from Kubernetes, MySQL, and many more data sources. You have the following options when installing and managing an Elastic Agent:
- Fleet-managed Elastic Agent – install an Elastic Agent and use Fleet in Kibana to define, configure, and manage your agents in a central location.
- Standalone Elastic Agent – install an Elastic Agent and manually configure it locally on the system where it’s installed. You are responsible for managing and upgrading the agents. This approach is reserved for advanced users only. A minimal configuration sketch follows this list.
- Elastic Agent in a containerized environment – run an Elastic Agent inside a container, either with Fleet Server or standalone.
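For the standalone option, the agent reads its policy from a local elastic-agent.yml. The following is a minimal sketch rather than a drop-in file: the output host, API key, dataset name, and log paths are placeholders, and the exact schema can vary between stack versions.

```yaml
# elastic-agent.yml – minimal standalone sketch; all values are placeholders.
outputs:
  default:
    type: elasticsearch
    hosts: ["https://my-deployment.es.example.com:443"]  # placeholder endpoint
    api_key: "id:api_key"                                # placeholder credentials

inputs:
  - id: system-logs           # arbitrary identifier for this input
    type: filestream          # tails log files on disk
    use_output: default
    streams:
      - id: var-log
        data_stream:
          dataset: system.log   # routes events into a logs-* data stream
        paths:
          - /var/log/*.log      # placeholder; point at your own log locations
```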
Filebeat
Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or Logstash for indexing. A minimal configuration sketch follows the resource list below.
- Filebeat overview – general information on Filebeat and how it works.
- Filebeat quick start – basic installation instructions to get you started.
- Set up and run Filebeat – information on how to install, set up, and run Filebeat.
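As a rough sketch of what the quick start walks through, a minimal filebeat.yml that tails log files and ships them to Elasticsearch might look like the following; the endpoint, credentials, and paths are placeholders.

```yaml
# filebeat.yml – minimal sketch; all values are placeholders.
filebeat.inputs:
  - type: filestream          # file-tailing input in recent Filebeat versions
    id: my-app-logs           # filestream inputs require a unique id
    paths:
      - /var/log/myapp/*.log  # placeholder path

output.elasticsearch:
  hosts: ["https://localhost:9200"]  # placeholder endpoint
  api_key: "id:api_key"              # placeholder credentials
```

Before starting the shipper, filebeat test config and filebeat test output can confirm that the configuration parses and that Elasticsearch is reachable.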
Configure logs
The following resources provide information on configuring your logs:
- Data streams – store append-only time series data across multiple indices while giving you a single named resource for requests.
- Data views – point to your log data from a specific time period or to all indices that contain your log data.
- Index lifecycle management – configure the built-in logs policy based on your application’s performance, resilience, and retention requirements.
- Ingest pipeline – parse server logs in the Common Log Format before indexing (see the sketch after this list).
- Mapping – define how data is stored and indexed.
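To make a few of these pieces concrete, the following Dev Tools sketch creates an ingest pipeline that parses Common Log Format lines and an index template that ties a data stream to that pipeline, an explicit mapping, and the built-in logs lifecycle policy. The names my-logs-pipeline, my-logs-template, and the my-logs-* pattern are hypothetical.

```console
PUT _ingest/pipeline/my-logs-pipeline
{
  "description": "Hypothetical pipeline: parse Common Log Format lines from the message field",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{COMMONAPACHELOG}"]
      }
    },
    {
      "date": {
        "field": "timestamp",
        "formats": ["dd/MMM/yyyy:HH:mm:ss Z"]
      }
    }
  ]
}

PUT _index_template/my-logs-template
{
  "index_patterns": ["my-logs-*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.lifecycle.name": "logs",
      "index.default_pipeline": "my-logs-pipeline"
    },
    "mappings": {
      "properties": {
        "message": { "type": "text" }
      }
    }
  }
}
```

Indexing a document into any name that matches the pattern (for example, POST my-logs-demo/_doc) then creates the data stream, runs the pipeline, and applies the mapping and lifecycle policy.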
View and monitor logs
With the Logs app in Kibana, you can search, filter, and tail all your logs ingested into Elasticsearch in one place.
The following resources provide information on viewing and monitoring your logs:
- Tail log files – monitor all of the log events flowing in from your servers, virtual machines, and containers in a centralized view.
- Inspect log anomalies – use machine learning to detect log anomalies automatically.
- Categorize log entries – use machine learning to categorize log messages to quickly identify patterns in your log events.
- Configure data sources – specify the source configuration for logs in the Logs app settings in the Kibana configuration file.
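As one example of the configuration-file route, the Logs app source settings have historically lived under the xpack.infra.sources namespace in kibana.yml. Treat the keys below as a version-dependent sketch and confirm them against the settings reference for your release; the index patterns are placeholders.

```yaml
# kibana.yml – Logs app source sketch; keys vary by Kibana version.
xpack.infra.sources.default:
  logAlias: "logs-*,filebeat-*"  # indices the Logs app reads from (placeholder)
  fields:
    timestamp: "@timestamp"      # field used to order log events
```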
Application logs
Application logs provide valuable insight into events that have occurred within your services and applications. See Application logs.
Create a logs threshold alert
You can create a rule to send an alert when the log aggregation exceeds a threshold. See Create a logs threshold rule.