Parse data by using ingest node

When you use Elasticsearch for output, you can configure Heartbeat to use ingest node to pre-process documents before the actual indexing takes place in Elasticsearch. Ingest node is a convenient processing option when you want to do some extra processing on your data, but you do not require the full power of Logstash. For example, you can create an ingest node pipeline in Elasticsearch that consists of one processor that removes a field in a document followed by another processor that renames a field, as in the sketch below.
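The following is a minimal sketch of such a two-processor pipeline. The field names used here (`temp` and `hostname`) are placeholders for illustration, not fields that Heartbeat actually emits:

```json
{
  "description": "Drop one field, then rename another",
  "processors": [
    { "remove": { "field": "temp" } },
    { "rename": { "field": "hostname", "target_field": "host.name" } }
  ]
}
```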
After defining the pipeline in Elasticsearch, you simply configure Heartbeat to use it. To configure Heartbeat, you specify the pipeline ID in the `pipeline` option under `output.elasticsearch` in the `heartbeat.yml` file:
```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: my_pipeline_id
```
For example, let’s say that you’ve defined the following pipeline in a file named `pipeline.json`:
{ "description": "Test pipeline", "processors": [ { "lowercase": { "field": "agent.name" } } ] }
To add the pipeline in Elasticsearch, you would run:
```sh
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
```
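To confirm that the pipeline was registered, you can fetch it back (assuming the same local cluster as above):

```sh
curl -XGET 'http://localhost:9200/_ingest/pipeline/test-pipeline'
```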
Then in the `heartbeat.yml` file, you would specify:

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: "test-pipeline"
```
When you run Heartbeat, the value of `agent.name` is converted to lowercase before indexing.
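You can also exercise the pipeline without running Heartbeat by using the ingest simulate API. This is a sketch; the sample document below is hypothetical rather than a real Heartbeat event:

```sh
curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/test-pipeline/_simulate' -d '{
  "docs": [
    { "_source": { "agent": { "name": "HEARTBEAT-HOST" } } }
  ]
}'
```

The response should show the transformed document with `agent.name` lowercased to `heartbeat-host`.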
For more information about defining a pre-processing pipeline, see the Ingest Node documentation.