Parse data by using ingest node
When you use Elasticsearch for output, you can configure Journalbeat to use ingest node to pre-process documents before the actual indexing takes place in Elasticsearch. Ingest node is a convenient processing option when you want to do some extra processing on your data but do not require the full power of Logstash. For example, you can create an ingest node pipeline in Elasticsearch that consists of one processor that removes a field in a document, followed by another processor that renames a field.
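For illustration, such a pipeline definition might look as follows. This is a minimal sketch: the field names (log.original and host_name) are invented for the example and are not produced by Journalbeat itself.

```json
{
  "description": "Remove one field, then rename another",
  "processors": [
    {
      "remove": {
        "field": "log.original"
      }
    },
    {
      "rename": {
        "field": "host_name",
        "target_field": "host.hostname"
      }
    }
  ]
}
```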
After defining the pipeline in Elasticsearch, you simply configure Journalbeat to use it. To do so, you specify the pipeline ID in the pipeline option under output.elasticsearch in the journalbeat.yml file:
```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: my_pipeline_id
```
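If different events should be pre-processed by different pipelines, the Elasticsearch output also supports a pipelines setting that selects a pipeline per event based on a condition. A minimal sketch, with made-up pipeline IDs and match strings:

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipelines:
    - pipeline: "warning_pipeline"   # hypothetical pipeline ID
      when.contains:
        message: "WARN"
    - pipeline: "error_pipeline"     # hypothetical pipeline ID
      when.contains:
        message: "ERR"
```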
For example, let’s say that you’ve defined the following pipeline in a file named pipeline.json:
{ "description": "Test pipeline", "processors": [ { "lowercase": { "field": "agent.name" } } ] }
To add the pipeline in Elasticsearch, you would run:

```sh
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
```
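Before pointing Journalbeat at the pipeline, you can check that it behaves as expected with the Elasticsearch simulate pipeline API; the sample document below is invented for the test:

```sh
curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/test-pipeline/_simulate' -d'
{
  "docs": [
    { "_source": { "agent": { "name": "Journalbeat" } } }
  ]
}'
```

The response shows the document after processing, with agent.name lowercased to journalbeat.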
Then in the journalbeat.yml file, you would specify:

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: "test-pipeline"
```
When you run Journalbeat, the value of agent.name is converted to lowercase before indexing.
For more information about defining a pre-processing pipeline, see the Ingest Node documentation.