Configure an HTTP endpoint for metrics

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Filebeat can expose internal metrics through an HTTP endpoint. These metrics are useful for monitoring the internal state of the Beat. For security reasons the endpoint is disabled by default, as you may want to avoid exposing this information.
The HTTP endpoint has the following configuration settings (a minimal example configuration follows the list):

http.enabled
(Optional) Enable the HTTP endpoint. Default is false.

http.host
(Optional) Bind to this hostname, IP address, unix socket (unix:///var/run/filebeat.sock) or Windows named pipe (npipe:///filebeat). It is recommended to use only localhost. Default is localhost.

http.port
(Optional) Port on which the HTTP endpoint will bind. Default is 5066.

http.named_pipe.user
(Optional) User to use to create the named pipe. This option works only on Windows. Defaults to the current user.

http.named_pipe.security_descriptor
(Optional) Windows security descriptor string defined in the SDDL format. Defaults to read and write permission for the current user.

http.pprof.enabled
(Optional) Enable the /debug/pprof/ endpoints when serving HTTP. It is recommended that this is only enabled on localhost, as these endpoints may leak data. Default is false.

http.pprof.block_profile_rate
(Optional) block_profile_rate controls the fraction of goroutine blocking events that are reported in the blocking profile available from /debug/pprof/block. The profiler aims to sample an average of one blocking event per rate nanoseconds spent blocked. To include every blocking event in the profile, pass rate = 1. To turn off profiling entirely, pass rate <= 0. Defaults to 0.

http.pprof.mem_profile_rate
(Optional) mem_profile_rate controls the fraction of memory allocations that are recorded and reported in the memory profile available from /debug/pprof/heap. The profiler aims to sample an average of one allocation per mem_profile_rate bytes allocated. To include every allocated block in the profile, set mem_profile_rate to 1. To turn off profiling entirely, set mem_profile_rate to 0. Defaults to 524288.

http.pprof.mutex_profile_rate
(Optional) mutex_profile_rate controls the fraction of mutex contention events that are reported in the mutex profile available from /debug/pprof/mutex. On average 1/rate events are reported. To turn off profiling entirely, pass rate 0. The default value is 0.
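As a minimal sketch, enabling the endpoint in filebeat.yml with the documented defaults made explicit might look like this (the values shown are illustrative, not required):

# Expose the monitoring endpoint on localhost only.
http.enabled: true
http.host: localhost
http.port: 5066

Because the endpoint can expose internal details, binding it to anything other than localhost should be a deliberate decision.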
This is the list of paths you can access. For pretty JSON output, append ?pretty to the URL.

You can query a unix socket using the curl command and the --unix-socket flag.
curl -XGET --unix-socket '/var/run/filebeat.sock' 'http:/stats/?pretty'
Info

The / endpoint provides basic information about the Filebeat instance. Example:
curl -XGET 'localhost:5066/?pretty'
{ "beat": "filebeat", "hostname": "example.lan", "name": "example.lan", "uuid": "34f6c6e1-45a8-4b12-9125-11b3e6e89866", "version": "8.17.0" }
Stats

The /stats endpoint reports internal metrics. Example:
curl -XGET 'localhost:5066/stats?pretty'
{ "beat": { "cpu": { "system": { "ticks": 1710, "time": { "ms": 1712 } }, "total": { "ticks": 3420, "time": { "ms": 3424 }, "value": 3420 }, "user": { "ticks": 1710, "time": { "ms": 1712 } } }, "info": { "ephemeral_id": "ab4287c4-d907-4d9d-b074-d8c3cec4a577", "uptime": { "ms": 195547 } }, "memstats": { "gc_next": 17855152, "memory_alloc": 9433384, "memory_total": 492478864, "rss": 50405376 }, "runtime": { "goroutines": 22 } }, "libbeat": { "config": { "module": { "running": 0, "starts": 0, "stops": 0 }, "scans": 1, "reloads": 1 }, "output": { "events": { "acked": 0, "active": 0, "batches": 0, "dropped": 0, "duplicates": 0, "failed": 0, "total": 0 }, "read": { "bytes": 0, "errors": 0 }, "type": "elasticsearch", "write": { "bytes": 0, "errors": 0 } }, "pipeline": { "clients": 6, "events": { "active": 716, "dropped": 0, "failed": 0, "filtered": 0, "published": 716, "retry": 278, "total": 716 }, "queue": { "acked": 0 } } }, "system": { "cpu": { "cores": 4 }, "load": { "1": 2.22, "15": 1.8, "5": 1.74, "norm": { "1": 0.555, "15": 0.45, "5": 0.435 } } } }
The actual output may contain more metrics specific to Filebeat.
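If you only need a single counter, you can combine the endpoint with standard command-line tools. As a sketch, assuming jq is installed, the following reads the number of published pipeline events from the example output above:

curl -s 'http://localhost:5066/stats' | jq '.libbeat.pipeline.events.published'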
Inputs

The /inputs/ endpoint returns metrics related to input instances. It returns a list of objects, where each object contains the metrics for one input instance. Each object will minimally contain an input field that identifies the type of input (for example, aws-s3) and an id field that is the unique identifier for the input instance.
A request may optionally include a type query parameter to request metrics for a specific type of input, and pretty may be included to pretty-format the returned JSON.
curl 'http://localhost:5066/inputs/'
curl 'http://localhost:5066/inputs/?pretty'
curl 'http://localhost:5066/inputs/?type=aws-s3&pretty'
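As a rough illustration only (the id value here is hypothetical, and the per-input metrics, elided below, vary by input type), a response might look like:

[
  {
    "input": "aws-s3",
    "id": "my-s3-input",
    ...
  }
]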