Run Filebeat on Kubernetes

You can use Filebeat Docker images on Kubernetes to retrieve and ship container logs.
Running Elastic Cloud on Kubernetes? See Run Beats on ECK.
Kubernetes deploy manifests

You deploy Filebeat as a DaemonSet to ensure there's a running instance on each node of the cluster.

The Docker logs host folder (/var/lib/docker/containers) is mounted on the Filebeat container. Filebeat starts an input for the files and begins harvesting them as soon as they appear in the folder.

Everything is deployed under the kube-system namespace by default. To change the namespace, modify the manifest file.
To download the manifest file, run:
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10/deploy/kubernetes/filebeat-kubernetes.yaml
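In the downloaded manifest, the Docker logs folder described above is mounted into the Filebeat container as a read-only hostPath volume. A trimmed sketch of the relevant DaemonSet fields (the volume name and image tag here are illustrative; the downloaded manifest is authoritative):

containers:
- name: filebeat
  image: docker.elastic.co/beats/filebeat:7.10.0
  volumeMounts:
  - name: varlibdockercontainers
    mountPath: /var/lib/docker/containers
    readOnly: true
volumes:
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers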
If you are using Kubernetes 1.7 or earlier: Filebeat uses a hostPath volume to persist internal data. It's located under /var/lib/filebeat-data. The manifest uses folder autocreation (DirectoryOrCreate), which was introduced in Kubernetes 1.8. You need to remove type: DirectoryOrCreate from the manifest and create the host folder yourself.
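For orientation, the internal data volume looks roughly like the following sketch (the volume name data is how the stock manifest typically names it; verify against your copy). On Kubernetes 1.7 or earlier, delete the type line and create /var/lib/filebeat-data on each node yourself:

volumes:
- name: data
  hostPath:
    path: /var/lib/filebeat-data
    type: DirectoryOrCreate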
Settings

By default, Filebeat sends events to an existing Elasticsearch deployment, if present. To specify a different destination, change the following parameters in the manifest file:

- name: ELASTICSEARCH_HOST
  value: elasticsearch
- name: ELASTICSEARCH_PORT
  value: "9200"
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: changeme
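These entries live under the Filebeat container's env section in the DaemonSet, and the filebeat.yml shipped in the manifest's ConfigMap reads them through environment-variable expansion. A sketch of that relationship, assuming the stock configuration (the value after the colon is the fallback used when the variable is unset):

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}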
Red Hat OpenShift configuration

If you are using Red Hat OpenShift, you need to specify additional settings in the manifest file and enable the container to run as privileged.

- Modify the DaemonSet container spec in the manifest file:

  securityContext:
    runAsUser: 0
    privileged: true

- Grant the filebeat service account access to the privileged SCC:

  oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:filebeat

  This command enables the container to be privileged as an administrator for OpenShift.

- Override the default node selector for the kube-system namespace (or your custom namespace) to allow for scheduling on any node:

  oc patch namespace kube-system -p \
    '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}'

  This command sets the node selector for the project to an empty string. If you don't run this command, the default node selector will skip master nodes.
Load Kibana dashboards

Filebeat comes packaged with various pre-built Kibana dashboards that you can use to visualize logs from your Kubernetes environment.

If these dashboards are not already loaded into Kibana, you must install Filebeat on any system that can connect to the Elastic Stack, and then run the setup command to load the dashboards. To learn how, see Load Kibana dashboards.

The setup command does not load the ingest pipelines used to parse log lines. By default, ingest pipelines are set up automatically the first time you run Filebeat and connect to Elasticsearch.

If you are using an output other than Elasticsearch, such as Logstash, you need to:

- Load the index template manually.
- Load the Kibana dashboards manually.
- Load the ingest pipelines manually.
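As a sketch of what that setup step can look like when run from a host that can reach the Elastic Stack (host names below are placeholders for your own Kibana and Elasticsearch endpoints):

filebeat setup --dashboards \
  -E setup.kibana.host=kibana:5601 \
  -E 'output.elasticsearch.hosts=["elasticsearch:9200"]'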
Deploy

To deploy Filebeat to Kubernetes, run:
kubectl create -f filebeat-kubernetes.yaml
To check the status, run:
$ kubectl --namespace=kube-system get ds/filebeat

NAME       DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
filebeat   32        32        0         32           0           <none>          1m
Log events should start flowing to Elasticsearch. The events are annotated with metadata added by the add_kubernetes_metadata processor.
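To spot-check that the pods are healthy and shipping, you can tail a Filebeat pod's own log. The label selector below assumes the k8s-app: filebeat label applied by the stock manifest; adjust it if you changed the labels:

kubectl --namespace=kube-system logs -l k8s-app=filebeat --tail=20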