Deduplicate data
The Beats framework guarantees at-least-once delivery to ensure that no data is lost when events are sent to outputs that support acknowledgement, such as Elasticsearch, Logstash, Kafka, and Redis. This is great if everything goes as planned. But if Filebeat shuts down during processing, or the connection is lost before events are acknowledged, you can end up with duplicate data.
What causes duplicates in Elasticsearch?
When an output is blocked, the retry mechanism in Filebeat attempts to resend events until they are acknowledged by the output. If the output receives the events, but is unable to acknowledge them, the data might be sent to the output multiple times. Because document IDs are typically set by Elasticsearch after it receives the data from Beats, the duplicate events are indexed as new documents.
How can I avoid duplicates?
Rather than allowing Elasticsearch to set the document ID, set the ID in Beats. The ID is stored in the Beats @metadata._id field and used to set the document ID during indexing. That way, if Beats sends the same event to Elasticsearch more than once, Elasticsearch overwrites the existing document rather than creating a new one.
The @metadata._id field is passed along with the event so that you can use it to set the document ID after the event has been published by Filebeat but before it's received by Elasticsearch. For example, see the Logstash pipeline example below.
There are several ways to set the document ID in Beats (a combined configuration sketch follows this list):
- add_id processor: Use the add_id processor when your data has no natural key field, and you can’t derive a unique key from existing fields. This example generates a unique ID for each event and adds it to the @metadata._id field:

    processors:
      - add_id: ~

- fingerprint processor: Use the fingerprint processor to derive a unique key from one or more existing fields. This example uses the values of field1 and field2 to derive a unique key that it adds to the @metadata._id field:

    processors:
      - fingerprint:
          fields: ["field1", "field2"]
          target_field: "@metadata._id"

- decode_json_fields processor: Use the document_id setting in the decode_json_fields processor when you’re decoding a JSON string that contains a natural key field. For this example, assume that the message field contains the JSON string {"myid": "100", "text": "Some text"}. This example takes the value of myid from the JSON string and stores it in the @metadata._id field:

    processors:
      - decode_json_fields:
          document_id: "myid"
          fields: ["message"]
          max_depth: 1
          target: ""

  The resulting document ID is 100.

- JSON input settings: Use the json.document_id input setting if you’re ingesting JSON-formatted data, and the data has a natural key field. This example takes the value of key1 from the JSON document and stores it in the @metadata._id field:

    filebeat.inputs:
    - type: log
      paths:
        - /path/to/json.log
      json.document_id: "key1"
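As a reference point, here is a minimal end-to-end sketch that combines the fingerprint approach above with an Elasticsearch output. The log path, the fingerprint fields, and the Elasticsearch host are placeholder assumptions, not values prescribed by this guide:

    filebeat.inputs:
    - type: log
      paths:
        - /var/log/app/app.log              # hypothetical log file

    processors:
      - fingerprint:
          fields: ["field1", "field2"]      # example fields; use your natural key fields
          target_field: "@metadata._id"     # Beats uses this value as the document ID

    output.elasticsearch:
      hosts: ["http://localhost:9200"]      # assumed local Elasticsearch instance

Because the ID is derived from the event content, resending the same event overwrites the existing document instead of creating a duplicate.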
Logstash pipeline example
For this example, assume that you’ve used one of the approaches described earlier to store the document ID in the Beats @metadata._id field. To preserve the ID when you send Beats data through Logstash en route to Elasticsearch, set the document_id field in the Logstash pipeline:
    input {
      beats {
        port => 5044
      }
    }

    output {
      if [@metadata][_id] {
        elasticsearch {
          hosts => ["http://localhost:9200"]
          document_id => "%{[@metadata][_id]}"
          index => "%{[@metadata][beat]}-%{[@metadata][version]}"
        }
      } else {
        elasticsearch {
          hosts => ["http://localhost:9200"]
          index => "%{[@metadata][beat]}-%{[@metadata][version]}"
        }
      }
    }
The document_id option in the elasticsearch output is set to the value stored in the @metadata._id field.
When Elasticsearch indexes the document, it sets the document ID to the specified value, preserving the ID passed from Beats.
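The beats input in the pipeline above listens on port 5044, so this setup assumes that Filebeat ships to Logstash rather than directly to Elasticsearch. A minimal sketch of the corresponding Filebeat output, with the host and port as assumptions for a local installation:

    output.logstash:
      hosts: ["localhost:5044"]    # assumed local Logstash; the port must match the beats input above

Filebeat supports only one enabled output at a time, so the Elasticsearch output must be disabled when events are routed through Logstash.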