Configure the internal queue
Packetbeat uses an internal queue to store events before publishing them. The queue is responsible for buffering and combining events into batches that can be consumed by the outputs. The outputs will use bulk operations to send a batch of events in one transaction.
You can configure the type and behavior of the internal queue by setting
options in the queue
section of the packetbeat.yml
config file. Only one
queue type can be configured.
This sample configuration sets the memory queue to buffer up to 4096 events:
queue.mem:
  events: 4096
Configure the memory queue
The memory queue keeps all events in memory.
If neither a flush interval nor a number of events to flush is configured,
all events published to this queue will be directly consumed by the outputs.
To enforce spooling in the queue, set the flush.min_events
and flush.timeout
options.
By default flush.min_events
is set to 2048 and flush.timeout
is set to 1s.
The output’s bulk_max_size
setting limits the number of events being processed at once.
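For example, a minimal sketch (the Elasticsearch output and host below are illustrative; adjust them to your environment) that caps each bulk request at 512 events while the memory queue buffers up to 4096:
queue.mem:
  events: 4096              # the queue buffers up to 4096 events

output.elasticsearch:
  hosts: ["localhost:9200"] # illustrative host
  bulk_max_size: 512        # each bulk request carries at most 512 events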
The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted into the memory queue. Only after the signal from the output will the queue free up space for more events to be accepted.
This sample configuration forwards events to the output if 512 events are available or the oldest available event has been waiting for 5s in the queue:
queue.mem:
  events: 4096
  flush.min_events: 512
  flush.timeout: 5s
Configuration options
You can specify the following options in the queue.mem
section of the packetbeat.yml
config file:
events
Number of events the queue can store.
The default value is 4096 events.
flush.min_events
Minimum number of events required for publishing. If this value is set to 0, the output can start publishing events without additional waiting times. Otherwise the output has to wait for more events to become available.
The default value is 2048.
flush.timeout
Maximum wait time for flush.min_events
to be fulfilled. If set to 0s, events
will be immediately available for consumption.
The default value is 1s.
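As a sketch, the lowest-latency configuration disables spooling entirely by setting both values to 0, so events are handed to the output as soon as they are published:
queue.mem:
  events: 4096
  flush.min_events: 0   # do not wait for a minimum batch size
  flush.timeout: 0s     # events are available to the output immediately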
Configure the file spool queue
This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
The file spool queue stores all events in an on-disk ring buffer. The spool has a write buffer to which new events are written. Events written to the spool are forwarded to the outputs only after the write buffer has been flushed successfully.
The spool waits for the output to acknowledge or drop events. If the spool is full, no new events can be inserted. The spool will block. Space is freed only after a signal from the output has been received.
On disk, the spool divides a file into pages. The file.page_size
setting
configures the file’s page size at file creation time. The optimal page size depends
on the effective block size used by the underlying file system.
This sample configuration enables the spool with all default settings (See Configuration options for defaults) and the default file path:
queue.spool: ~
This sample configuration creates a spool of 512MiB, with 16KiB pages. The write buffer is flushed if 10MiB of contents, or 1024 events have been written. If the oldest available event has been waiting for 5s in the write buffer, the buffer will be flushed as well:
queue.spool:
  file:
    path: "${path.data}/spool.dat"
    size: 512MiB
    page_size: 16KiB
  write:
    buffer_size: 10MiB
    flush.timeout: 5s
    flush.events: 1024
Configuration options
You can specify the following options in the queue.spool
section of the
packetbeat.yml
config file:
file.path
The spool file path. The file is created on startup if it does not exist.
The default value is "${path.data}/spool.dat".
file.permissions
The file permissions. The permissions are applied when the file is
created. If the file already exists, its permissions are compared
with file.permissions. The spool file is not opened if the actual file
permissions are more permissive than configured.
The default value is 0600.
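For illustration, a sketch that places the spool file at a custom location (the path below is hypothetical) and spells out the default permissions:
queue.spool:
  file:
    path: "/var/lib/packetbeat/spool.dat"  # hypothetical custom location
    permissions: 0600                      # only the owner may read and write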
file.size
Spool file size.
The default value is 100 MiB.
The size should be much larger than the expected event sizes and write buffer size. Otherwise the queue will block because it does not have enough space.
The file size cannot be changed once the file has been generated. This limitation will be removed in the future.
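As an illustrative sketch, keep the spool size well above the write buffer size so the queue does not block:
queue.spool:
  file:
    size: 512MiB        # much larger than the write buffer below
  write:
    buffer_size: 10MiB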
file.page_size
The file’s page size.
The spool file is split into pages of page_size. All I/O operations operate on complete pages.
The default value is 4096 (4KiB).
This setting should match the file system’s minimum block size. If the
page_size
is not a multiple of the file system’s block size, the file system
might create additional read operations on writes.
The page size is only set at file creation time. It cannot be changed afterwards.
file.prealloc
If prealloc is set to true, truncate is used to reserve the space up to file.size. This setting is only used when the file is created.
The file will grow dynamically if prealloc is set to false. The spool blocks if prealloc is false and the system is out of disk space.
The default value is true.
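A sketch that lets the file grow on demand instead of reserving the full size at creation time:
queue.spool:
  file:
    size: 100MiB      # upper bound the file may grow to
    prealloc: false   # do not reserve the space when the file is created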
write.buffer_size
The write buffer size. The write buffer is flushed once the buffer size is exceeded.
Events bigger than the configured buffer size are still accepted, but the write buffer is flushed right after such an event has been serialized.
The default value is 1MiB.
write.codec
The event encoding used for serialized events. Valid values are json and cbor.
The default value is cbor.
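For example, to store events in a human-readable encoding (cbor, the default, is a more compact binary format), switch the codec to json:
queue.spool:
  write:
    codec: json   # easier to inspect on disk than the default cbor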
write.flush.timeout
Maximum wait time of the oldest event in the write buffer. If set to 0, the
write buffer will only be flushed once write.flush.events
or write.buffer_size
is fulfilled.
The default value is 1s.
write.flush.events
Number of buffered events. The write buffer is flushed once the limit is reached.
The default value is 16384.
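As a combined sketch (values are illustrative), the write buffer below is flushed as soon as 4096 events or 2MiB have been buffered, or the oldest buffered event has waited for 2s:
queue.spool:
  write:
    buffer_size: 2MiB
    flush.events: 4096
    flush.timeout: 2s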
read.flush.timeout
The spool reader tries to read up to the output’s bulk_max_size
events at once.
If read.flush.timeout
is set to 0s, all available events are forwarded
immediately to the output.
If read.flush.timeout
is set to a value bigger than 0s, the spool will wait
for more events to be flushed. Events are forwarded to the output if
bulk_max_size
events have been read or the oldest read event has been waiting
for the configured duration.
The default value is 0s.
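For example, a sketch that lets the reader wait briefly so batches closer to the output’s bulk_max_size can form (the 100ms value is illustrative):
queue.spool:
  read:
    flush.timeout: 100ms   # wait up to 100ms for a fuller batch before forwarding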