WARNING: Version 5.2 of Filebeat has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Configuring Filebeat to Use Ingest Node
When you use Elasticsearch for output, you can configure Filebeat to use ingest node to pre-process documents before the actual indexing takes place in Elasticsearch. Ingest node is a convenient processing option when you want to do some extra processing on your data, but you do not require the full power of Logstash. For example, you can create an ingest node pipeline in Elasticsearch that consists of one processor that removes a field from a document followed by another processor that renames a field.
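For instance, a minimal sketch of such a pipeline might look like the following. The field names temp_field, old_name, and new_name are placeholders, but remove and rename are standard ingest processors:

{
  "description": "Drop one field, then rename another",
  "processors": [
    {
      "remove": {
        "field": "temp_field"
      }
    },
    {
      "rename": {
        "field": "old_name",
        "target_field": "new_name"
      }
    }
  ]
}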
After defining the pipeline in Elasticsearch, you simply configure your Beat to use the pipeline. To configure Filebeat, you specify the pipeline ID in the pipeline option under output.elasticsearch in the filebeat.yml file:
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: my_pipeline_id
For example, let’s say that you’ve defined the following pipeline in a file named pipeline.json:
{ "description": "Test pipeline", "processors": [ { "lowercase": { "field": "beat.name" } } ] }
To add the pipeline in Elasticsearch, you would run:
curl -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
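To confirm that Elasticsearch registered the pipeline, you can fetch it back by ID with the get pipeline API (an optional check; it is not part of the original steps):

curl -XGET 'http://localhost:9200/_ingest/pipeline/test-pipeline'

Elasticsearch responds with the pipeline definition you uploaded.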
Then in the filebeat.yml file, you would specify:
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: "test-pipeline"
When you run Filebeat, the value of beat.name is converted to lowercase before indexing.
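If you want to verify the transformation before any real data is indexed, the ingest simulate API (an optional check, not part of the original walkthrough) runs a sample document through the pipeline; the hostname MyHost below is just a placeholder:

curl -XPOST 'http://localhost:9200/_ingest/pipeline/test-pipeline/_simulate' -d '
{
  "docs": [
    {
      "_source": {
        "beat": {
          "name": "MyHost"
        }
      }
    }
  ]
}'

The response shows the document as it would be indexed, with beat.name lowercased to myhost.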
For more information about defining a pre-processing pipeline, see the Ingest Node documentation.