# Use Metricbeat to send monitoring data

In 7.3 and later, you can use Metricbeat to collect data about Filebeat and ship it to the monitoring cluster. The benefit of using Metricbeat instead of internal collection is that the monitoring agent remains active even if the Filebeat instance dies.
To collect and ship monitoring data:
## Configure the shipper you want to monitor

1. Enable the HTTP endpoint to allow external collection of monitoring data.

   Add the following setting in the Filebeat configuration file (`filebeat.yml`):

   ```yaml
   http.enabled: true
   ```

   By default, metrics are exposed on port 5066. If you need to monitor multiple Beats shippers running on the same server, set `http.port` to expose metrics for each shipper on a different port number:

   ```yaml
   http.port: 5067
   ```

2. Disable the default collection of Filebeat monitoring metrics.

   Add the following setting in the Filebeat configuration file (`filebeat.yml`):

   ```yaml
   monitoring.enabled: false
   ```

   For more information, see Monitoring configuration options.

3. Configure the host (optional).

   If you intend to collect metrics using Metricbeat installed on another server, bind Filebeat’s HTTP endpoint to an IP address that is reachable from that server:

   ```yaml
   http.host: xxx.xxx.xxx.xxx
   ```

4. Configure the cluster UUID.

   The cluster UUID is necessary if you want to see Beats monitoring in the Kibana Stack Monitoring view; the monitoring data is grouped under the cluster with that UUID. To associate Filebeat with the cluster UUID, set:

   ```yaml
   monitoring.cluster_uuid: "cluster-uuid"
   ```

   These settings are pulled together in the consolidated sketch after this list.

5. Start Filebeat.
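Putting steps 1 through 4 together: before starting Filebeat, the monitoring-related portion of `filebeat.yml` might look like the following sketch. The bind address is an illustrative assumption (a documentation-range placeholder IP), not a value from this guide:

```yaml
# Expose the HTTP endpoint so Metricbeat can collect monitoring data externally.
http.enabled: true
http.port: 5066         # default port; use a different one per shipper on shared hosts
http.host: 192.0.2.10   # assumed: an address reachable from the Metricbeat server

# Hand monitoring over to Metricbeat; disable internal collection.
monitoring.enabled: false

# Group this Beat under your production cluster in the Stack Monitoring view.
monitoring.cluster_uuid: "cluster-uuid"
```

Once Filebeat is running, you can check that the endpoint is reachable; the Beats HTTP endpoint serves JSON documents at paths such as `/` and `/stats`:

```sh
curl http://localhost:5066/stats
```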
## Install and configure Metricbeat to collect monitoring data

1. Install Metricbeat on the same server as Filebeat. To learn how, see Get started with Metricbeat. If you already have Metricbeat installed on the server, skip this step.

2. Enable the `beat-xpack` module in Metricbeat.

   For example, to enable the default configuration in the `modules.d` directory, run the following command, using the correct command syntax for your OS:

   ```sh
   metricbeat modules enable beat-xpack
   ```

   For more information, see Configure modules and the beat module documentation.
3. Configure the `beat-xpack` module in Metricbeat.

   The `modules.d/beat-xpack.yml` file contains the following settings:

   ```yaml
   - module: beat
     metricsets:
       - stats
       - state
     period: 10s
     hosts: ["http://localhost:5066"]
     #username: "user"
     #password: "secret"
     xpack.enabled: true
   ```

   Set the `hosts`, `username`, and `password` settings as required by your environment. For other module settings, it’s recommended that you accept the defaults.

   By default, the module collects Filebeat monitoring data from `localhost:5066`. If you exposed the metrics on a different host or port when you enabled the HTTP endpoint, update the `hosts` setting.

   To monitor multiple Beats agents, specify a list of hosts, for example:

   ```yaml
   hosts: ["http://localhost:5066","http://localhost:5067","http://localhost:5068"]
   ```

   If you configured Filebeat to use encrypted communications, you must access it via HTTPS. For example, use a `hosts` setting like `https://localhost:5066`.

   If the Elastic security features are enabled, you must also provide a user ID and password so that Metricbeat can collect metrics successfully:

   1. Create a user on the Elasticsearch cluster that has the `remote_monitoring_collector` built-in role. Alternatively, if it’s available in your environment, use the `remote_monitoring_user` built-in user.

   2. Add the `username` and `password` settings to the beat module configuration file.

   A filled-in sketch of this module configuration appears after this list.
4. Optional: Disable the system module in Metricbeat.

   By default, the system module is enabled. The information it collects, however, is not shown on the Stack Monitoring page in Kibana. Unless you want to use that information for other purposes, run the following command:

   ```sh
   metricbeat modules disable system
   ```
5. Identify where to send the monitoring data.

   In production environments, we strongly recommend using a separate cluster (referred to as the monitoring cluster) to store the data. Using a separate monitoring cluster prevents production cluster outages from impacting your ability to access your monitoring data. It also prevents monitoring activities from impacting the performance of your production cluster.

   For example, specify the Elasticsearch output information in the Metricbeat configuration file (`metricbeat.yml`):

   ```yaml
   output.elasticsearch:
     # Array of hosts to connect to.
     hosts: ["http://es-mon-1:9200", "http://es-mon-2:9200"]

     # Optional protocol and basic auth credentials.
     #protocol: "https"
     #api_key: "id:api_key"
     #username: "elastic"
     #password: "changeme"
   ```

   In this example, the data is stored on a monitoring cluster with nodes `es-mon-1` and `es-mon-2`. Specify one of `api_key` or `username`/`password`.

   If you configured the monitoring cluster to use encrypted communications, you must access it via HTTPS. For example, use a `hosts` setting like `https://es-mon-1:9200`.

   The Elasticsearch monitoring features use ingest pipelines, so the cluster that stores the monitoring data must have at least one node with the `ingest` role.

   If the Elasticsearch security features are enabled on the monitoring cluster, you must provide a valid user ID and password so that Metricbeat can send metrics successfully:

   1. Create a user on the monitoring cluster that has the `remote_monitoring_agent` built-in role. Alternatively, if it’s available in your environment, use the `remote_monitoring_user` built-in user.

      If you’re using index lifecycle management, the remote monitoring user requires additional privileges to create and read indices. For more information, see Grant users access to secured resources.

   2. Add the `username` and `password` settings to the Elasticsearch output information in the Metricbeat configuration file.

   For more information about these configuration options, see Configure the Elasticsearch output. A sketch of this output configured for a secured monitoring cluster appears after this list.
6. Start Metricbeat to begin collecting monitoring data.

7. View the monitoring data in Kibana.
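Filling in step 3 for a concrete setup: the sketch below assumes three Filebeat shippers on the same server (ports 5066 to 5068, as in the multi-host example above), SSL enabled on their HTTP endpoints, and a user holding the `remote_monitoring_collector` role; the password reference and certificate path are illustrative assumptions:

```yaml
# modules.d/beat-xpack.yml -- assumed multi-shipper, TLS-enabled setup
- module: beat
  metricsets:
    - stats
    - state
  period: 10s
  # HTTPS because the shippers use encrypted communications.
  hosts:
    - "https://localhost:5066"
    - "https://localhost:5067"
    - "https://localhost:5068"
  # Credentials for a user with the remote_monitoring_collector built-in role.
  username: "remote_monitoring_user"
  password: "${MONITORING_PASSWORD}"   # assumed: resolved from the environment or keystore
  # Assumed: only needed if the endpoints use certificates Metricbeat does not already trust.
  #ssl.certificate_authorities: ["/etc/metricbeat/certs/ca.pem"]
  xpack.enabled: true
```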
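Likewise for step 5, a sketch of the `metricbeat.yml` output section once the monitoring cluster has security and TLS enabled; the host names follow the example above, and the credentials are placeholders:

```yaml
output.elasticsearch:
  # HTTPS because the monitoring cluster uses encrypted communications.
  hosts: ["https://es-mon-1:9200", "https://es-mon-2:9200"]
  # Provide either an API key...
  #api_key: "id:api_key"
  # ...or a user with the remote_monitoring_agent built-in role (not both).
  username: "remote_monitoring_user"
  password: "${MONITORING_PASSWORD}"   # assumed: resolved from the environment or keystore
```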