Parse data using an ingest pipeline
When you use Elasticsearch for output, you can configure Winlogbeat to use an ingest pipeline to pre-process documents before the actual indexing takes place in Elasticsearch. An ingest pipeline is a convenient processing option when you want to do some extra processing on your data, but you do not require the full power of Logstash. For example, you can create an ingest pipeline in Elasticsearch that consists of one processor that removes a field in a document followed by another processor that renames a field.
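A minimal sketch of such a pipeline might look like the following. The field names are illustrative only; substitute fields that actually exist in your events:

{
  "description": "Example: remove one field, then rename another (field names are illustrative)",
  "processors": [
    {
      "remove": {
        "field": "event.original"
      }
    },
    {
      "rename": {
        "field": "winlog.event_data.TempName",
        "target_field": "winlog.event_data.FinalName"
      }
    }
  ]
}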
After defining the pipeline in Elasticsearch, you simply configure Winlogbeat to use it. To configure Winlogbeat, you specify the pipeline ID in the pipeline option under output.elasticsearch in the winlogbeat.yml file:
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: my_pipeline_id
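If different events need different processing, the Elasticsearch output also accepts a pipelines setting that selects a pipeline per event based on a condition. The sketch below is illustrative only: the pipeline ID and the channel condition are assumptions, not values Winlogbeat provides for you:

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipelines:
    # pipeline ID and condition below are illustrative
    - pipeline: "security_pipeline"
      when.equals:
        winlog.channel: "Security"

The remainder of this example uses a single pipeline.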
For example, let’s say that you’ve defined the following pipeline in a file named pipeline.json:
{ "description": "Test pipeline", "processors": [ { "lowercase": { "field": "agent.name" } } ] }
To add the pipeline in Elasticsearch, you would run:
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
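Before pointing Winlogbeat at the pipeline, you can check that it behaves as expected with the simulate pipeline API. The sample document below is a stand-in; only the agent.name field matters for this pipeline:

# illustrative: run the pipeline against a sample document before enabling it in Winlogbeat
curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/test-pipeline/_simulate' -d '
{
  "docs": [
    { "_source": { "agent": { "name": "MY-HOST" } } }
  ]
}'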
Then in the winlogbeat.yml file, you would specify:
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: "test-pipeline"
When you run Winlogbeat, the value of agent.name is converted to lowercase before indexing.
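To confirm that the pipeline is being applied, you can inspect a recently indexed event. This assumes the default winlogbeat-* index or data stream name:

# assumes the default winlogbeat-* index/data stream name
curl -XGET 'http://localhost:9200/winlogbeat-*/_search?size=1&_source=agent.name'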
For more information about defining a pre-processing pipeline, see the ingest pipeline documentation.