Parse data using an ingest pipeline
When you use the Elasticsearch output, you can configure Auditbeat to use an ingest pipeline to pre-process documents before the actual indexing takes place in Elasticsearch. An ingest pipeline is a convenient processing option when you want to do some extra processing on your data, but you do not require the full power of Logstash. For example, you can create an ingest pipeline in Elasticsearch that consists of one processor that removes a field in a document followed by another processor that renames a field.
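A pipeline along those lines might look like the following sketch. The `remove` and `rename` processors are standard Elasticsearch ingest processors, but the field names here are purely illustrative:

```json
{
  "description": "Drop one field, then rename another (illustrative field names)",
  "processors": [
    { "remove": { "field": "event.original" } },
    { "rename": { "field": "host.hostname", "target_field": "host.name" } }
  ]
}
```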
After defining the pipeline in Elasticsearch, you simply configure Auditbeat to use it. To do so, specify the pipeline ID in the `pipeline` option under `output.elasticsearch` in the `auditbeat.yml` file:
```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: my_pipeline_id
```
For example, let's say that you've defined the following pipeline in a file named `pipeline.json`:
{ "description": "Test pipeline", "processors": [ { "lowercase": { "field": "agent.name" } } ] }
To add the pipeline in Elasticsearch, you would run:
```sh
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
```
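As an optional check (not part of the original walkthrough), you can confirm that the pipeline was stored by fetching it back with the get pipeline API:

```sh
curl -XGET 'http://localhost:9200/_ingest/pipeline/test-pipeline'
```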
Then in the `auditbeat.yml` file, you would specify:
```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: "test-pipeline"
```
When you run Auditbeat, the value of `agent.name` is converted to lowercase before indexing.
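If you want to try the pipeline without running Auditbeat at all, one option is Elasticsearch's simulate pipeline API. The following is a minimal sketch with a made-up sample document:

```sh
# Run the stored pipeline against a test document; the response should
# show agent.name lowercased from "MYHOST" to "myhost".
curl -H 'Content-Type: application/json' -XPOST \
  'http://localhost:9200/_ingest/pipeline/test-pipeline/_simulate' \
  -d '{"docs":[{"_source":{"agent":{"name":"MYHOST"}}}]}'
```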
For more information about defining a pre-processing pipeline, see the ingest pipeline documentation.