Configure Kibana dashboard loading

Filebeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana.
To load the dashboards, you can either enable dashboard loading in the setup.dashboards section of the filebeat.yml config file, or you can run the setup command. Dashboard loading is disabled by default.
When dashboard loading is enabled, Filebeat uses the Kibana API to load the sample dashboards. Dashboard loading is only attempted when Filebeat starts up. If Kibana is not available at startup, Filebeat will stop with an error.
To enable dashboard loading, add the following setting to the config file:
setup.dashboards.enabled: true
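Because the dashboards are loaded through the Kibana API, Filebeat also needs to know how to reach Kibana. A minimal sketch of the relevant filebeat.yml settings; the host value is only an example and should match your own Kibana endpoint:

setup.dashboards.enabled: true
setup.kibana:
  host: "localhost:5601"

Alternatively, you can load the dashboards once without editing the config file by running the setup command with the --dashboards flag, for example: filebeat setup --dashboards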
Configuration options

You can specify the following options in the setup.dashboards section of the filebeat.yml config file:
setup.dashboards.enabled

If this option is set to true, Filebeat loads the sample Kibana dashboards from the local kibana directory in the home path of the Filebeat installation. Filebeat loads dashboards on startup if either enabled is set to true or the setup.dashboards section is included in the configuration.

When dashboard loading is enabled, Filebeat overwrites any existing dashboards that match the names of the dashboards you are loading. This happens every time Filebeat starts.

If no other options are set, the dashboards are loaded from the local kibana directory in the home path of the Filebeat installation.
To load dashboards from a different location, you can configure one of the following options: setup.dashboards.directory, setup.dashboards.url, or setup.dashboards.file.
setup.dashboards.directory

The directory that contains the dashboards to load. The default is the kibana folder in the home path.
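For example, to load dashboards from a custom location on disk instead of the bundled kibana directory (the path below is a placeholder):

setup.dashboards.enabled: true
setup.dashboards.directory: /opt/filebeat/custom-dashboards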
setup.dashboards.url

The URL to use for downloading the dashboard archive. If this option is set, Filebeat downloads the dashboard archive from the specified URL instead of using the local directory.
setup.dashboards.file

The file archive (zip file) that contains the dashboards to load. If this option is set, Filebeat looks for a dashboard archive in the specified path instead of using the local directory.
setup.dashboards.beat

If the archive contains dashboards for multiple Beats, this setting lets you select the Beat for which you want to load dashboards. To load all the dashboards in the archive, set this option to an empty string. The default is "filebeat".
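As an illustration, the following sketch loads the dashboards from a downloaded archive and selects only the Filebeat dashboards from it; the URL is a placeholder, not a real download location:

setup.dashboards.enabled: true
setup.dashboards.url: "https://example.com/beats-dashboards.zip"
setup.dashboards.beat: "filebeat"

To load the dashboards for every Beat in the archive instead, set setup.dashboards.beat to an empty string ("").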
setup.dashboards.kibana_index

The name of the Kibana index to use for setting the configuration. The default is ".kibana".
setup.dashboards.index

The Elasticsearch index name. This setting overwrites the index name defined in the dashboards and index pattern. Example: "testbeat-*". This setting only works for Kibana 6.0 and newer.
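For example, to point the loaded dashboards and index pattern at a custom index, using the example pattern from above:

setup.dashboards.enabled: true
setup.dashboards.index: "testbeat-*"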
setup.dashboards.always_kibana

Force loading of dashboards using the Kibana API without querying Elasticsearch for the version. The default is false.
setup.dashboards.retry.enabled

If this option is set to true and Kibana is not reachable when the dashboards are loaded, Filebeat retries the connection to Kibana instead of exiting with an error. Disabled by default.
setup.dashboards.retry.interval

The duration to wait between Kibana connection retries. Defaults to 1 second.
setup.dashboards.retry.maximum

The maximum number of retries before exiting with an error. Set to 0 for unlimited retrying. The default is unlimited.
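A sketch that keeps retrying the Kibana connection every 5 seconds and gives up after 10 attempts; the values are illustrative only:

setup.dashboards.retry.enabled: true
setup.dashboards.retry.interval: 5s
setup.dashboards.retry.maximum: 10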
setup.dashboards.string_replacements

A map of needle and replacement strings, used to replace the needle strings in the dashboards and their referenced contents.
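A minimal sketch, assuming the map is keyed by the needle string with the replacement as the value; both strings below are hypothetical:

setup.dashboards.string_replacements:
  "CUSTOMINDEX": "my-custom-index"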