Queues and data resiliency

By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events.
As data flows through the event processing pipeline, Logstash may encounter situations that prevent it from delivering events to the configured output. For example, the data might contain unexpected data types, or Logstash might terminate abnormally.
To guard against data loss and ensure that events flow through the pipeline without interruption, Logstash provides data resiliency features.
- Persistent queues (PQ) protect against data loss by storing events in an internal queue on disk.
- Dead letter queues (DLQ) provide on-disk storage for events that Logstash is unable to process, so that you can evaluate them. You can easily reprocess events in the dead letter queue by using the dead_letter_queue input plugin, as shown in the sketch after this list.
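As an illustration, a minimal pipeline that reads events back out of the dead letter queue with the dead_letter_queue input plugin might look like the following sketch. The path and the Elasticsearch output are assumptions for the example; point the path at the dead_letter_queue directory under your configured path.data and send the recovered events wherever you normally would.

```
input {
  dead_letter_queue {
    # Assumed location; use the dead_letter_queue directory under your path.data
    path => "/var/lib/logstash/dead_letter_queue"
    commit_offsets => true   # remember which events have already been read
    pipeline_id => "main"    # read entries written by the "main" pipeline
  }
}

output {
  # Example destination for the reprocessed events
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}
```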
Both resiliency features are disabled by default. To turn them on, you must explicitly enable them in the Logstash settings file (logstash.yml).
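For example, a minimal sketch of the relevant logstash.yml settings might look like this; the queue size shown is illustrative, not a requirement:

```yaml
# Buffer events on disk instead of in memory (persistent queue)
queue.type: persisted
queue.max_bytes: 1gb              # illustrative cap on the on-disk queue size

# Store events that Logstash cannot process in the dead letter queue
dead_letter_queue.enable: true
```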