Parse data by using ingest node
When you use Elasticsearch for output, you can configure Functionbeat to use ingest node to pre-process documents before the actual indexing takes place in Elasticsearch. Ingest node is a convenient processing option when you want to do some extra processing on your data, but you do not require the full power of Logstash. For example, you can create an ingest node pipeline in Elasticsearch that consists of one processor that removes a field in a document followed by another processor that renames a field.
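A pipeline along those lines might look like the following sketch. The field names temp_field, old_name, and new_name are placeholders for illustration, not fields that Functionbeat produces:

{
  "description": "Example only: remove one field, then rename another",
  "processors": [
    {
      "remove": {
        "field": "temp_field"
      }
    },
    {
      "rename": {
        "field": "old_name",
        "target_field": "new_name"
      }
    }
  ]
}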
After defining the pipeline in Elasticsearch, you simply configure Functionbeat to use it. To configure Functionbeat, you specify the pipeline ID in the pipeline option under output.elasticsearch in the functionbeat.yml file:
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: my_pipeline_id
For example, let’s say that you’ve defined the following pipeline in a file named pipeline.json:
{ "description": "Test pipeline", "processors": [ { "lowercase": { "field": "agent.name" } } ] }
To add the pipeline in Elasticsearch, you would run:
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
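If you want to confirm that the pipeline was stored, you can retrieve it from the same Elasticsearch instance:

curl -XGET 'http://localhost:9200/_ingest/pipeline/test-pipeline'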
Then in the functionbeat.yml file, you would specify:
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: "test-pipeline"
When you run Functionbeat, the value of agent.name is converted to lowercase before indexing.
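If you want to preview that behavior without indexing anything, one option is the ingest pipeline simulate API. The sample document below is invented for illustration:

curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/test-pipeline/_simulate' -d '{
  "docs": [
    {
      "_source": {
        "agent": {
          "name": "MyAgent"
        }
      }
    }
  ]
}'

The response should show the document as it would be indexed, with agent.name lowercased to myagent.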
For more information about defining a pre-processing pipeline, see the Ingest Node documentation.