Step 4: Set up the Kibana dashboards
For deeper observability into your infrastructure, you can use the Metrics app and the Logs app in Kibana. For more details, see the Metrics Monitoring Guide and the Logs Monitoring Guide.
Filebeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana. Before you can use the dashboards, you need to create the index pattern, filebeat-*, and load the dashboards into Kibana. To do this, you can either run the setup command (as described here) or configure dashboard loading in the filebeat.yml config file, as sketched below.
This requires a Kibana endpoint configuration. If you didn’t already configure a Kibana endpoint, see configure Filebeat.
Make sure Kibana is running before you perform this step. If you are accessing a secured Kibana instance, make sure you’ve configured credentials as described in Step 2: Configure Filebeat.
To set up the Kibana dashboards for Filebeat, use the appropriate command for your system. The command shown here loads the dashboards from the Filebeat package. For more options, such as loading customized dashboards, see Importing Existing Beat Dashboards in the Beats Developer Guide. If you’ve configured the Logstash output, see Set up dashboards for Logstash output.
deb and rpm:
filebeat setup --dashboards
mac:
./filebeat setup --dashboards
brew:
filebeat setup --dashboards
linux:
./filebeat setup --dashboards
docker:
docker run --net="host" docker.elastic.co/beats/filebeat:7.6.2 setup --dashboards
win:
Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select Run As Administrator).
From the PowerShell prompt, change to the directory where you installed Filebeat, and run:
PS > .\filebeat.exe setup --dashboards
Set up dashboards for Logstash output
During dashboard loading, Filebeat connects to Elasticsearch to check version information. To load dashboards when the Logstash output is enabled, you need to temporarily disable the Logstash output and enable Elasticsearch. To connect to a secured Elasticsearch cluster, you also need to pass Elasticsearch credentials.
The examples below show a hard-coded password, but you should store sensitive values in the secrets keystore.
deb and rpm:
filebeat setup -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['localhost:9200'] \
  -E output.elasticsearch.username=filebeat_internal \
  -E output.elasticsearch.password=YOUR_PASSWORD \
  -E setup.kibana.host=localhost:5601
mac:
./filebeat setup -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['localhost:9200'] \
  -E output.elasticsearch.username=filebeat_internal \
  -E output.elasticsearch.password=YOUR_PASSWORD \
  -E setup.kibana.host=localhost:5601
brew:
filebeat setup -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['localhost:9200'] \
  -E output.elasticsearch.username=filebeat_internal \
  -E output.elasticsearch.password=YOUR_PASSWORD \
  -E setup.kibana.host=localhost:5601
linux:
./filebeat setup -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['localhost:9200'] \
  -E output.elasticsearch.username=filebeat_internal \
  -E output.elasticsearch.password=YOUR_PASSWORD \
  -E setup.kibana.host=localhost:5601
docker:
docker run --net="host" docker.elastic.co/beats/filebeat:7.6.2 setup -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['localhost:9200'] \
  -E output.elasticsearch.username=filebeat_internal \
  -E output.elasticsearch.password=YOUR_PASSWORD \
  -E setup.kibana.host=localhost:5601
win:
Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select Run As Administrator).
From the PowerShell prompt, change to the directory where you installed Filebeat, and run:
PS > .\filebeat.exe setup -e `
  -E output.logstash.enabled=false `
  -E output.elasticsearch.hosts=['localhost:9200'] `
  -E output.elasticsearch.username=filebeat_internal `
  -E output.elasticsearch.password=YOUR_PASSWORD `
  -E setup.kibana.host=localhost:5601