Add Kubernetes metadata

The add_kubernetes_metadata processor annotates each event with relevant metadata based on which Kubernetes pod the event originated from. At startup it detects an in_cluster environment and caches the Kubernetes-related metadata. Events are only annotated if a valid configuration is detected. If it is not able to detect a valid Kubernetes configuration, the events are not annotated with Kubernetes-related metadata.
Each event is annotated with:
- Pod Name
- Pod UID
- Namespace
- Labels
In addition, the Node's metadata and the Namespace's metadata are added to the Pod's metadata.
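For illustration, an event enriched by this processor might carry fields along these lines (a sketch with hypothetical values; the exact set of fields depends on the pod and on the add_resource_metadata settings described below):
  kubernetes:
    pod:
      name: "app-5d87f9b7c6-x2k9q"     # hypothetical pod name
      uid: "c3f9a1d2-7e44-4b1a-9c1e-0000aaaa1111"
    namespace: "default"
    labels:
      app: "app"
    node:
      name: "worker-1"                 # added from the Node metadata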
The add_kubernetes_metadata processor has two basic building blocks:
- Indexers
- Matchers
Indexers use pod metadata to create unique identifiers for each one of the pods. These identifiers help correlate the metadata of the observed pods with actual events. For example, the ip_port indexer can take a Kubernetes pod and create identifiers for it based on all its pod_ip:container_port combinations.
Matchers use information in events to construct lookup keys that match the identifiers created by the indexers. For example, when the fields matcher takes ["metricset.host"] as a lookup field, it constructs a lookup key with the value of the field metricset.host. When one of these lookup keys matches one of the identifiers, the event is enriched with the metadata of the identified pod.
When add_kubernetes_metadata is used with Filebeat, it uses the container indexer and the logs_path matcher. So events whose path in log.file.path contains a reference to a container ID are enriched with the metadata of that container's pod.
This behaviour can be disabled by disabling the default indexers and matchers in the configuration:
processors:
  - add_kubernetes_metadata:
      default_indexers.enabled: false
      default_matchers.enabled: false
You can find more information about the available indexers and matchers, and some examples in Indexers and matchers.
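For reference, an explicit configuration roughly equivalent to that default behaviour would look like the following sketch, which assumes logs are collected from the default Docker logs path on Linux (see the logs_path matcher below):
processors:
  - add_kubernetes_metadata:
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - container:
      matchers:
        - logs_path:
            logs_path: '/var/lib/docker/containers/'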
The configuration below enables the processor when Filebeat is run as a pod in Kubernetes.
processors:
  - add_kubernetes_metadata:
The configuration below enables the processor on a Beat running as a process on the Kubernetes node.
processors:
  - add_kubernetes_metadata:
      host: <hostname>
      # If kube_config is not set, KUBECONFIG environment variable will be checked
      # and if not present it will fall back to InCluster
      kube_config: ${HOME}/.kube/config
The configuration below has the default indexers and matchers disabled and enables the ones that the user is interested in.
processors:
  - add_kubernetes_metadata:
      host: <hostname>
      # If kube_config is not set, KUBECONFIG environment variable will be checked
      # and if not present it will fall back to InCluster
      kube_config: ~/.kube/config
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - ip_port:
      matchers:
        - fields:
            lookup_fields: ["metricset.host"]
The add_kubernetes_metadata processor has the following configuration settings:
- host: (Optional) Specify the node to scope Filebeat to in case it cannot be accurately detected, as when running Filebeat in host network mode.
- namespace: (Optional) Select the namespace from which to collect the metadata. If it is not set, the processor collects metadata from all namespaces. It is unset by default (see the sketch after this list for an example).
- add_resource_metadata: (Optional) Specify filters for the labels and annotations in the extra metadata coming from the Node and Namespace. add_resource_metadata can be configured for node or namespace. By default, all labels are included, while annotations are not. These settings are useful when storing labels and annotations requires special handling to avoid overloading the storage output. The enrichment with node or namespace metadata can be individually disabled by setting enabled: false. Example:
    add_resource_metadata:
      namespace:
        include_labels: ["namespacelabel1"]
      node:
        include_labels: ["nodelabel2"]
        include_annotations: ["nodeannotation1"]
- kube_config: (Optional) Use the given config file as configuration for the Kubernetes client. It defaults to the KUBECONFIG environment variable if present.
- default_indexers.enabled: (Optional) Enable or disable the default pod indexers, in case you want to specify your own.
- default_matchers.enabled: (Optional) Enable or disable the default pod matchers, in case you want to specify your own.
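For instance, a minimal sketch that restricts metadata collection to a single namespace while keeping the default indexers and matchers (the namespace name kube-system is only illustrative) could look like this:
processors:
  - add_kubernetes_metadata:
      namespace: "kube-system"   # illustrative; collect metadata from this namespace only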
Indexers and matchers

Indexers

Indexers use pod metadata to create unique identifiers for each one of the pods.
Available indexers are:
- container: Identifies the pod metadata using the IDs of its containers.
- ip_port: Identifies the pod metadata using combinations of its IP and its exposed ports. When using this indexer, metadata is identified using the IP of the pods and the combination of ip:port for each one of the ports exposed by its containers.
- pod_name: Identifies the pod metadata using its namespace and its name as namespace/pod_name.
- pod_uid: Identifies the pod metadata using the UID of the pod.
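As an illustration of the less common indexers, the following sketch pairs the pod_name indexer with the fields matcher described below; it assumes your events carry a custom field, here hypothetically called fields.pod_ref, whose value has the namespace/pod_name form:
processors:
  - add_kubernetes_metadata:
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - pod_name:
      matchers:
        - fields:
            lookup_fields: ['fields.pod_ref']   # hypothetical field containing "namespace/pod_name"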
Matchers

Matchers are used to construct the lookup keys that match the identifiers created by the indexers.

field_format

Looks up pod metadata using a key created with a string format that can include event fields.
This matcher has an option format to define the string format. This string format can contain placeholders for any field in the event.
For example, the following configuration uses the ip_port indexer to identify the pod metadata by combinations of the pod IP and its exposed ports, and uses the destination IP and port in events as match keys:
processors:
  - add_kubernetes_metadata:
      ...
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - ip_port:
      matchers:
        - field_format:
            format: '%{[destination.ip]}:%{[destination.port]}'
fields

Looks up pod metadata using the value of some specific fields as the key. When multiple fields are defined, the first one included in the event is used.
This matcher has an option lookup_fields to define the fields whose value will be used for the lookup.
For example, the following configuration uses the ip_port indexer to identify pods, and defines a matcher that uses the destination IP or the server IP for the lookup, whichever it finds first in the event:
processors:
  - add_kubernetes_metadata:
      ...
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - ip_port:
      matchers:
        - fields:
            lookup_fields: ['destination.ip', 'server.ip']
logs_path

Looks up pod metadata using identifiers extracted from the log path stored in the log.file.path field.
This matcher has the following configuration settings:
- logs_path: (Optional) Base path of container logs. If not specified, it uses the default logs path of the platform where Filebeat is running.
- resource_type: (Optional) Type of the resource to obtain the ID of. It can be pod, to make the lookup based on the pod UID, or container, to make the lookup based on the container ID. It defaults to container.
The default configuration is able to look up the metadata using the container ID when the logs are collected from the default Docker logs path (/var/lib/docker/containers/<container ID>/... on Linux).
For example, the following configuration would use the pod UID when the logs are collected from /var/lib/kubelet/pods/<pod UID>/...:
processors:
  - add_kubernetes_metadata:
      ...
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - pod_uid:
      matchers:
        - logs_path:
            logs_path: '/var/lib/kubelet/pods'
            resource_type: 'pod'