Plaintext application logs

Ingest and parse plaintext logs, including existing logs, from any programming language or framework without modifying your application or its configuration.

Plaintext logs require some additional setup that structured logs do not require:

  • To search, filter, and aggregate effectively, you need to parse plaintext logs using an ingest pipeline to extract structured fields. Parsing is based on log format, so you might have to maintain different settings for different applications.
  • To correlate plaintext logs, you need to inject IDs into log messages and parse them using an ingest pipeline.

To ingest, parse, and correlate plaintext logs:

  1. Ingest plaintext logs with Filebeat or Elastic Agent and parse them before indexing with an ingest pipeline.
  2. Correlate plaintext logs with an APM agent.
  3. View logs in Discover or Logs Explorer.

Ingest logs

Send application logs to Elasticsearch using one of the following shipping tools:

  • Filebeat: A lightweight data shipper that sends log data to Elasticsearch.
  • Elastic Agent: A single agent for logs, metrics, security data, and threat prevention. Combined with Fleet, you can centrally manage Elastic Agent policies and lifecycles directly from Kibana.
Ingest logs with Filebeat

Follow these steps to ingest application logs with Filebeat.

Step 1: Install Filebeat

Install Filebeat on the server you want to monitor by running the commands that align with your system:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-darwin-x86_64.tar.gz
tar xzvf filebeat-9.0.0-beta1-darwin-x86_64.tar.gz
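
The commands above download the macOS (darwin) package. Packages for other systems follow the same naming pattern; for example, a Linux x86_64 install might look like the following (check the Filebeat download page for the exact artifact for your platform):

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-linux-x86_64.tar.gz
tar xzvf filebeat-9.0.0-beta1-linux-x86_64.tar.gz
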
Step 2: Connect to Elasticsearch

Connect to Elasticsearch using an API key to set up Filebeat. Set the following information in the filebeat.yml file:

output.elasticsearch:
  hosts: ["your-projects-elasticsearch-endpoint"]
  api_key: "id:api_key"
  1. Set the hosts to your deployment’s Elasticsearch endpoint. Copy the Elasticsearch endpoint from the Help menu (help icon) → Connection details. For example, https://my-deployment.es.us-central1.gcp.cloud.es.io:443.
  2. From Developer tools, run the following command to create an API key that grants manage permissions for the cluster and the filebeat-* indices:

    POST /_security/api_key
    {
      "name": "filebeat_host001",
      "role_descriptors": {
        "filebeat_writer": {
          "cluster": ["manage"],
          "index": [
            {
              "names": ["filebeat-*"],
              "privileges": ["manage", "create_doc"]
            }
          ]
        }
      }
    }

    Refer to Grant access using API keys for more information.
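
    The response to this request includes an id and an api_key value. For example (the values below are illustrative):

    {
      "id": "TiNAGG4BaaMdaH1tRfuU",
      "name": "filebeat_host001",
      "api_key": "KnR6yE41RrSowb0kQ0HWoA"
    }

    In filebeat.yml, combine the two values in the format id:api_key for the api_key setting shown above.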

Step 3: Configure Filebeat

Add the following configuration to your filebeat.yml file to start collecting log data.

filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /path/to/logs.log

filestream: Reads lines from an active log file.

paths: The paths that you want Filebeat to crawl and fetch logs from.

Step 4: Set up and start Filebeat

Filebeat comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets:

From the Filebeat installation directory, set the index template by running the command that aligns with your system:

filebeat setup -e

From the Filebeat installation directory, start Filebeat by running the command that aligns with your system:

sudo service filebeat start

If you use an init.d script to start Filebeat, you can’t specify command line flags (see Command reference). To specify flags, start Filebeat in the foreground.
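
For example, to run Filebeat in the foreground from the installation directory with an explicit configuration file (a sketch; adjust the paths for your install):

sudo ./filebeat -e -c filebeat.yml

The -e flag logs to stderr instead of the configured log output, which is useful while testing your setup.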

Also see Filebeat and systemd.

Step 5: Parse logs with an ingest pipeline

Use an ingest pipeline to parse the contents of your logs into structured, Elastic Common Schema (ECS)-compatible fields.

Create an ingest pipeline that defines a dissect processor to extract structured ECS fields from your log messages. In your project, navigate to Developer Tools and use a command similar to the following example:

PUT _ingest/pipeline/filebeat* 
{
  "description": "Extracts the timestamp log level and host ip",
  "processors": [
    {
      "dissect": { 
        "field": "message", 
        "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" 
      }
    }
  ]
}

_ingest/pipeline/filebeat*: The name of the pipeline. Update the pipeline name to match the name of your data stream. For more information, refer to Data stream naming scheme.

processors.dissect: Adds a dissect processor to extract structured fields from your log message.

field: The field you’re extracting data from, message in this case.

pattern: The pattern of the elements in your log data. The pattern varies depending on your log format. %{@timestamp} is required. %{log.level}, %{host.ip}, and %{message} are common ECS fields. This pattern would match a log file in this format: 2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.

Refer to Extract structured fields for more on using ingest pipelines to parse your log data.
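
To check that the dissect pattern extracts what you expect before sending real data through the pipeline, you can test it with the simulate ingest API. For example, using the same pattern and the sample log line from above:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "dissect": {
          "field": "message",
          "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected"
      }
    }
  ]
}

The response shows the @timestamp, log.level, and host.ip fields extracted from the message, so you can adjust the pattern before indexing any documents.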

After creating your pipeline, specify the pipeline for Filebeat in the filebeat.yml file:

output.elasticsearch:
  hosts: ["your-projects-elasticsearch-endpoint"]
  api_key: "id:api_key"
  pipeline: "your-pipeline" 

pipeline: The name of the ingest pipeline you created. Add this setting to the output to route documents through your pipeline.
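
Putting the steps together, a minimal filebeat.yml for this setup might look like the following sketch; the endpoint, API key, log path, and pipeline name are placeholders that you replace with your own values:

filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /path/to/logs.log

output.elasticsearch:
  hosts: ["your-projects-elasticsearch-endpoint"]
  api_key: "id:api_key"
  pipeline: "your-pipeline"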

Ingest logs with the Elastic Agent

Follow these steps to ingest and centrally manage your logs using Elastic Agent and Fleet.

Step 1: Add the custom logs integration to your project

To add the custom logs integration to your project:

  1. From your deployment’s home page, click Add Integrations.
  2. Type custom in the search bar and select Custom Logs.
  3. Click Add Custom Logs.
  4. Click Install Elastic Agent at the bottom of the page, and follow the instructions for your system to install the Elastic Agent.
  5. After installing the Elastic Agent, configure the integration from the Add Custom Logs integration page.
  6. Give your integration a meaningful name and description.
  7. Add the Log file path. For example, /var/log/your-logs.log.
  8. Give your agent policy a name. The agent policy defines the data your Elastic Agent collects.
  9. Save your integration to add it to your deployment.
Step 2: Add an ingest pipeline to your integration

To aggregate or search for information in plaintext logs, use an ingest pipeline with your integration to parse the contents of your logs into structured, Elastic Common Schema (ECS)-compatible fields.

  1. From the custom logs integration, select the Integration policies tab.
  2. Select the integration policy you created in the previous section.
  3. Click Change defaults → Advanced options.
  4. Under Ingest pipelines, click Add custom pipeline.
  5. Create an ingest pipeline with a dissect processor to extract structured fields from your log messages.

    Click Import processors and add JSON similar to the following example:

    {
      "description": "Extracts the timestamp log level and host ip",
      "processors": [
        {
          "dissect": { 
            "field": "message", 
            "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" 
          }
        }
      ]
    }

    processors.dissect: Adds a dissect processor to extract structured fields from your log message.

    field: The field you’re extracting data from, message in this case.

    pattern: The pattern of the elements in your log data. The pattern varies depending on your log format. %{@timestamp}, %{log.level}, %{host.ip}, and %{message} are common ECS fields. This pattern would match a log file in this format: 2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.

  6. Click Create pipeline.
  7. Save and deploy your integration.

Correlate logs

Correlate your application logs with trace events to:

  • view the context of a log and the parameters provided by a user
  • view all logs belonging to a particular trace
  • easily move between logs and traces when debugging application issues

Log correlation works on two levels:

  • at service level: annotations with service.name, service.version, and service.environment allow you to link logs with APM services
  • at trace level: annotations with trace.id and transaction.id allow you to link logs with traces
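
For example, a parsed and correlated log document might carry fields like the following (the values are illustrative):

{
  "@timestamp": "2023-11-07T09:39:01.012Z",
  "log.level": "ERROR",
  "message": "Server hardware failure detected",
  "service.name": "checkout-service",
  "service.version": "1.0.3",
  "service.environment": "production",
  "trace.id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "transaction.id": "00f067aa0ba902b7"
}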

Learn more about correlating plaintext logs in the agent-specific ingestion guides.

View logs

To view logs ingested by Filebeat, go to Discover from the main menu and create a data view based on the filebeat-* index pattern. Refer to Create a data view for more information.

To view logs ingested by Elastic Agent, go to Logs Explorer by clicking Explorer under Logs from the Observability main menu. Refer to the Filter and aggregate logs documentation for more information on viewing and filtering your logs in Kibana.
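
Once the parsed fields are available, you can filter on them directly. For example, a KQL query in Discover or Logs Explorer that shows only error-level logs from a specific host (the field names match the dissect pattern above) might look like this:

log.level : "ERROR" and host.ip : "192.168.1.110"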