Add Nomad metadata

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
The add_nomad_metadata processor adds fields with relevant metadata for applications deployed in Nomad.

Each event is annotated with the following information:

- Allocation name, identifier, and status.
- Job name and type.
- Namespace where the job is deployed.
- Datacenter and region where the agent running the allocation is located.
processors:
  - add_nomad_metadata: ~
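As a rough sketch of the result, an annotated event might carry metadata like the following, based on the list above (the nomad.* field layout and all values here are assumptions for illustration, not taken from this page):

nomad:
  allocation:
    name: cache-redis-1      # allocation name (hypothetical value)
    id: 9a4fcd12-0e4c-4b3a-8e6d-2f1a6b0c9d42   # allocation identifier (hypothetical)
    status: running          # allocation status
  job:
    name: cache              # job name (hypothetical)
    type: service            # job type
  namespace: default         # namespace where the job is deployed
  datacenter: dc1            # datacenter of the agent running the allocation
  region: global             # region of the agent running the allocation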
It has the following settings to configure the connection (a combined sketch follows the list):

- address: (Optional) The URL of the agent API used to request the metadata. It uses http://127.0.0.1:4646 by default.
- namespace: (Optional) Namespace to watch. If set, only events for allocations in this namespace are annotated.
- region: (Optional) Region to watch. If set, only events for allocations in this region are annotated.
- secret_id: (Optional) SecretID to use when connecting with the agent API. This is an example ACL policy to apply to the token:

  namespace "*" {
    policy = "read"
  }
  node {
    policy = "read"
  }
  agent {
    policy = "read"
  }

- refresh_interval: (Optional) Interval used to update the cached metadata. It defaults to 30 seconds.
- cleanup_timeout: (Optional) Time to wait after an allocation has been removed before cleaning up its associated resources. This is useful if you expect to receive events after an allocation has been removed, which can happen when collecting logs. It defaults to 60 seconds.
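For example, a minimal sketch combining several of these connection settings (the address is the documented default; the namespace value and the NOMAD_TOKEN environment variable are hypothetical):

processors:
  - add_nomad_metadata:
      address: http://127.0.0.1:4646   # documented default, shown explicitly
      namespace: my-namespace          # hypothetical namespace to watch
      secret_id: ${NOMAD_TOKEN}        # token injected via an environment variable (hypothetical)
      refresh_interval: 30s            # the documented default of 30 seconds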
You can decide whether Filebeat should annotate events related to allocations on the local node only or in the whole cluster by configuring the scope with the following settings:

- scope: (Optional) Scope of the resources to watch. It can be node, to get metadata only for the allocations in a single agent, or global, to get metadata for allocations running on any agent. It defaults to node.
- node: (Optional) When using scope: node, use node to specify the name of the local node if it cannot be discovered automatically.
For example, the following configuration could be used if Filebeat is collecting events from all the allocations in the cluster:

processors:
  - add_nomad_metadata:
      scope: global
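Conversely, with scope: node, the name of the local node can be set explicitly when it cannot be discovered automatically (the node name nomad-client-01 is hypothetical):

processors:
  - add_nomad_metadata:
      scope: node
      node: nomad-client-01   # hypothetical local node name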
Indexers and matchers

Indexers and matchers are used to correlate fields in events with actual metadata. Filebeat uses this information to know what metadata to include in each event.

Indexers

Indexers use allocation metadata to create unique identifiers for each one of the allocations.

Available indexers are:
- allocation_name: Identifies an allocation by its name and namespace (as <namespace>/<name>).
- allocation_uuid: Identifies an allocation by its unique identifier.
Matchers

Matchers are used to construct the lookup keys that match the identifiers created by the indexers.

field_format

Looks up allocation metadata using a key created with a string format that can include event fields.

This matcher has an option, format, to define the string format. This string format can contain placeholders for any field in the event.
For example, the following configuration uses the allocation_name indexer to identify the allocation metadata by its name and namespace, and uses custom fields existing in the event as match keys:

processors:
  - add_nomad_metadata:
      ...
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - allocation_name:
      matchers:
        - field_format:
            format: '%{[labels.nomad_namespace]}/%{[fields.nomad_alloc_name]}'
fields

Looks up allocation metadata using the value of some specific fields as the key. When multiple fields are defined, the first one included in the event is used.

This matcher has an option, lookup_fields, to define the fields whose value will be used for the lookup.
For example, the following configuration uses the allocation_uuid indexer to identify allocations, and defines a matcher that checks several fields where the allocation UUID can be found, using the first one present in the event:

processors:
  - add_nomad_metadata:
      ...
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - allocation_uuid:
      matchers:
        - fields:
            lookup_fields: ['host.name', 'fields.nomad_alloc_uuid']
logs_path

Looks up allocation metadata using identifiers extracted from the log path stored in the log.file.path field.

This matcher has an optional logs_path option with the base path of the directory containing the logs for the local agent.

The default configuration is able to look up the metadata using the allocation UUID when the logs are collected under /var/lib/nomad.

For example, the following configuration would use the allocation UUID when the logs are collected from /var/lib/NomadClient001/alloc/<alloc UUID>/alloc/logs/...:
processors:
  - add_nomad_metadata:
      ...
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - allocation_uuid:
      matchers:
        - logs_path:
            logs_path: '/var/lib/NomadClient001'