Plaintext application logs
[preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
Ingest and parse plaintext logs, including existing logs, from any programming language or framework without modifying your application or its configuration.
Plaintext logs require some additional setup that structured logs do not:
- To search, filter, and aggregate effectively, you need to parse plaintext logs using an ingest pipeline to extract structured fields. Parsing is based on log format, so you might have to maintain different settings for different applications.
- To correlate plaintext logs, you need to inject IDs into log messages and parse them using an ingest pipeline.
To ingest, parse, and correlate plaintext logs:
- Ingest plaintext logs with Filebeat or Elastic Agent and parse them before indexing with an ingest pipeline.
- Correlate plaintext logs with an APM agent.
- View logs in Logs Explorer.
Ingest logs
Send application logs to your project using one of the following shipping tools:
- Filebeat: A lightweight data shipper that sends log data to your project.
- Elastic Agent: A single agent for logs, metrics, security data, and threat prevention. With Fleet, you can centrally manage Elastic Agent policies and lifecycles directly from your project.
Use Filebeat version 8.11+ for the best experience when ingesting logs with Filebeat.
Follow these steps to ingest application logs with Filebeat.
Install Filebeat on the server you want to monitor by running the commands that align with your system:
macOS:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-darwin-x86_64.tar.gz
tar xzvf filebeat-9.0.0-beta1-darwin-x86_64.tar.gz

Linux:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-linux-x86_64.tar.gz
tar xzvf filebeat-9.0.0-beta1-linux-x86_64.tar.gz
Windows:
- Download the Filebeat Windows zip file: https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-windows-x86_64.zip
- Extract the contents of the zip file into C:\Program Files.
- Rename the filebeat-9.0.0-beta1-windows-x86_64 directory to Filebeat.
- Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select Run As Administrator).
- From the PowerShell prompt, run the following commands to install Filebeat as a Windows service:

PS > cd 'C:\Program Files\Filebeat'
PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1

If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example:

PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1
DEB:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-amd64.deb
sudo dpkg -i filebeat-9.0.0-beta1-amd64.deb

RPM:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-x86_64.rpm
sudo rpm -vi filebeat-9.0.0-beta1-x86_64.rpm
Connect to your project using an API key to set up Filebeat. Set the following information in the filebeat.yml file:

output.elasticsearch:
  hosts: ["your-projects-elasticsearch-endpoint"]
  api_key: "id:api_key"
- Set the hosts to your project's Elasticsearch endpoint. Locate your project's endpoint by clicking the help icon and selecting Endpoints. Add the Elasticsearch endpoint to your configuration.
- From Developer Tools, run the following command to create an API key that grants manage permissions for the cluster and the filebeat-* indices:

POST /_security/api_key
{
  "name": "your_api_key",
  "role_descriptors": {
    "filebeat_writer": {
      "cluster": ["manage"],
      "index": [
        {
          "names": ["filebeat-*"],
          "privileges": ["manage", "create_doc"]
        }
      ]
    }
  }
}

Refer to Grant access using API keys for more information.
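The create API key response returns an id and an api_key value. Combine them as id:api_key for the api_key setting in filebeat.yml. For example, an illustrative response:

{
  "id": "TiNAGG4BaaMdaH1tRfuU",
  "name": "your_api_key",
  "api_key": "KnR6yE41RrSowb0kQ0HWoA"
}

would translate to the following setting:

api_key: "TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA"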
Add the following configuration to the filebeat.yml file to start collecting log data.
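For example, a minimal filestream input for a plaintext log file might look like the following sketch; the id and paths values are placeholders to adapt to your application:

filebeat.inputs:
- type: filestream
  id: your-application-logs
  paths:
    - /var/log/*.log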
You can add additional settings to the filebeat.yml file to meet the needs of your specific setup. For example, the following settings would add a parser to manage messages that span multiple lines and add service fields:
parsers:
- multiline:
    type: pattern
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    negate: true
    match: after
fields_under_root: true
fields:
  service.name: your_service_name
  service.environment: your_service_environment
  event.dataset: your_event_dataset
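With the multiline settings above, any line that does not start with a date is appended to the previous event, so a hypothetical stack trace like the following would be stored as a single log entry:

2023-11-07T09:39:01.012Z ERROR Request failed
java.lang.NullPointerException: user was null
    at com.example.UserService.load(UserService.java:42)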
From the Filebeat installation directory, set the index template by running the command that aligns with your system:
macOS and Linux (tar.gz):

./filebeat setup --index-management

Windows:

PS > .\filebeat.exe setup --index-management

DEB and RPM:

filebeat setup --index-management
From the Filebeat installation directory, start Filebeat by running the command that aligns with your system:

macOS and Linux (tar.gz):

sudo chown root filebeat.yml
sudo ./filebeat -e

You'll be running Filebeat as root, so you need to change ownership of the configuration file and any configurations enabled in the modules.d directory, or run Filebeat with --strict.perms=false specified. Refer to Config file ownership and permissions.

Windows:

PS C:\Program Files\Filebeat> Start-Service filebeat

By default, Windows log files are stored in C:\ProgramData\filebeat\Logs.

DEB and RPM:

sudo service filebeat start

If you use an init.d script to start Filebeat, you can't specify command line flags (refer to Command reference). To specify flags, start Filebeat in the foreground. Also, refer to Filebeat and systemd.
Use an ingest pipeline to parse the contents of your logs into structured, Elastic Common Schema (ECS)-compatible fields.
Create an ingest pipeline with a dissect processor to extract structured ECS fields from your log messages. In your project, go to Developer Tools and use a command similar to the following example:
PUT _ingest/pipeline/filebeat*
{
  "description": "Extracts the timestamp, log level, and host ip",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
      }
    }
  ]
}
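Before relying on the pipeline, you can check that the dissect pattern extracts the fields you expect with the simulate pipeline API in Developer Tools; the sample message below is illustrative only:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "dissect": {
          "field": "message",
          "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected."
      }
    }
  ]
}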
Refer to Extract structured fields for more on using ingest pipelines to parse your log data.
After creating your pipeline, specify the pipeline for Filebeat in the filebeat.yml file:

output.elasticsearch:
  hosts: ["your-projects-elasticsearch-endpoint"]
  api_key: "id:api_key"
  pipeline: "your-pipeline"
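Putting these settings together, a minimal filebeat.yml for plaintext logs might look like the following sketch; the input id, paths, service fields, endpoint, API key, and pipeline name are all placeholders to adapt:

filebeat.inputs:
- type: filestream
  id: your-application-logs
  paths:
    - /var/log/*.log
  fields_under_root: true
  fields:
    service.name: your_service_name
    service.environment: your_service_environment

output.elasticsearch:
  hosts: ["your-projects-elasticsearch-endpoint"]
  api_key: "id:api_key"
  pipeline: "your-pipeline"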
Follow these steps to ingest and centrally manage your logs using Elastic Agent and Fleet.
To add the custom logs integration to your project:
- In your Observability project, go to Project Settings → Integrations.
- Type custom in the search bar and select Custom Logs.
- Click Add Custom Logs.
- Click Install Elastic Agent at the bottom of the page, and follow the instructions for your system to install the Elastic Agent.
- After installing the Elastic Agent, configure the integration from the Add Custom Logs integration page.
- Give your integration a meaningful name and description.
- Add the Log file path. For example, /var/log/your-logs.log.
- An agent policy is created that defines the data your Elastic Agent collects. If you've previously installed an Elastic Agent on the host you're collecting logs from, you can select the Existing hosts tab and use an existing agent policy.
- Click Save and continue.
You can add additional settings to the integration under Custom log file by clicking Advanced options and adding YAML configurations to the Custom configurations. For example, the following settings would add a parser to manage messages that span multiple lines and add service fields. Service fields are used for Log correlation.
parsers:
- multiline:
    type: pattern
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    negate: true
    match: after
fields_under_root: true
fields:
  service.name: your_service_name
  service.version: your_service_version
  service.environment: your_service_environment
For Log correlation, add the service.name, service.version, and service.environment fields to your configuration.
To aggregate or search for information in plaintext logs, use an ingest pipeline with your integration to parse the contents of your logs into structured, Elastic Common Schema (ECS)-compatible fields.
- From the custom logs integration, select the Integration policies tab.
- Select the integration policy you created in the previous section.
- Click Change defaults → Advanced options.
- Under Ingest pipelines, click Add custom pipeline.
- Create an ingest pipeline with a dissect processor to extract structured fields from your log messages. Click Import processors and add JSON similar to the following example:

{
  "description": "Extracts the timestamp, log level, and host ip",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
      }
    }
  ]
}
- processors.dissect: Adds a dissect processor to extract structured fields from your log message.
- field: The field you're extracting data from, message in this case.
- pattern: The pattern of the elements in your log data. The pattern varies depending on your log format. %{@timestamp}, %{log.level}, %{host.ip}, and %{message} are common ECS fields. This pattern would match a log file in this format (the fields it produces are sketched after these steps):

2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.
- Click Create pipeline.
- Save and deploy your integration.
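Assuming the example pattern and log line above, the dissect processor would populate fields along these lines (illustrative; exact handling depends on your mappings):

@timestamp: 2023-11-07T09:39:01.012Z
log.level: ERROR
host.ip: 192.168.1.110
message: Server hardware failure detected.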
Correlate logs
Correlate your application logs with trace events to:
- view the context of a log and the parameters provided by a user
- view all logs belonging to a particular trace
- easily move between logs and traces when debugging application issues
Log correlation works on two levels:
- at the service level: annotation with service.name, service.version, and service.environment allows you to link logs with APM services
- at the trace level: annotation with trace.id and transaction.id allows you to link logs with traces
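For example, if your application appends trace identifiers to each message (the log format below is purely illustrative, not the output of any specific agent), a dissect processor in your ingest pipeline could extract them into the corresponding fields:

{
  "dissect": {
    "field": "message",
    "pattern": "%{@timestamp} %{log.level} [trace.id=%{trace.id} transaction.id=%{transaction.id}] %{message}"
  }
}

A matching log line would look like:

2023-11-07T09:39:01.012Z ERROR [trace.id=4bf92f3577b34da6a3ce929d0e0e4736 transaction.id=00f067aa0ba902b7] Server hardware failure detected.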
Learn about correlating plaintext logs in the agent-specific ingestion guides.
View logs
To view logs ingested by Filebeat, go to Discover. Create a data view based on the filebeat-* index pattern. Refer to Create a data view for more information.
To view logs ingested by Elastic Agent, go to Discover and select the Logs Explorer tab. Refer to the Filter and aggregate logs documentation for more on viewing and filtering your log data.