WARNING: Version 5.2 of Filebeat has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Logstash Output Configuration
The Logstash output sends the events directly to Logstash by using the lumberjack protocol, which runs over TCP. To use this option, you must install and configure the Beats input plugin for Logstash. Logstash allows for additional processing and routing of generated events.
Every event sent to Logstash contains additional metadata for indexing and filtering:
{ ... "@metadata": { "beat": "<beat>", "type": "<event type>" } }
In Logstash, you can configure the Elasticsearch output plugin to use the metadata and event type for indexing.
The following Logstash configuration file, for Logstash versions 2.x and 5.x, configures Logstash to use the index and document type reported by Beats for indexing events into Elasticsearch. The date portion of the index name is based on the @timestamp field as identified by Logstash.
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Events indexed into Elasticsearch with the Logstash configuration shown here will be similar to events directly indexed by Beats into Elasticsearch.
Here is an example of how to configure the Beat to use Logstash:
output.logstash:
  hosts: ["localhost:5044"]
  index: filebeat
Compatibility
This output works with all compatible versions of Logstash. See "Supported Beats Versions" in the Elastic Support Matrix.
Logstash Output Options
You can specify the following options in the logstash section of the filebeat.yml config file:
enabled
The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.
The default value is true.
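For example, a minimal sketch that disables the output while keeping the rest of its settings in place (the host value is illustrative):

output.logstash:
  # Disable the Logstash output without removing its configuration.
  enabled: false
  hosts: ["localhost:5044"]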
hosts
The list of known Logstash servers to connect to. All entries in this list can contain a port number. If no port number is given, the value specified for port is used as the default port number.
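For example, a sketch that mixes entries with and without an explicit port (hostnames are illustrative):

output.logstash:
  # logstash2.example.com has no port, so the value of the port setting is used.
  hosts: ["logstash1.example.com:5044", "logstash2.example.com"]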
compression_level
The gzip compression level. Setting this value to 0 disables compression. The compression level must be in the range of 1 (best speed) to 9 (best compression).
Increasing the compression level reduces network usage but increases CPU usage.
The default value is 3.
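For example, a sketch that raises the compression level to trade CPU usage for network usage (the host value is illustrative):

output.logstash:
  hosts: ["localhost:5044"]
  # 9 = best compression (highest CPU usage); 0 disables compression.
  compression_level: 9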
worker
The number of workers per configured host publishing events to Logstash. This is best used with load balancing mode enabled. Example: If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host).
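For example, a sketch of the 2 hosts and 3 workers case described above (hostnames are illustrative):

output.logstash:
  hosts: ["logstash1.example.com:5044", "logstash2.example.com:5044"]
  loadbalance: true
  # 3 workers per host, so 6 workers in total.
  worker: 3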
loadbalance
If set to true and multiple Logstash hosts are configured, the output plugin load balances published events onto all Logstash hosts. If set to false, the output plugin sends all events to only one host (determined at random) and will switch to another host if the selected one becomes unresponsive. The default value is false.
output.logstash:
  hosts: ["localhost:5044", "localhost:5045"]
  loadbalance: true
  index: filebeat
pipelining
Configures the number of batches to be sent asynchronously to Logstash while waiting for an ACK from Logstash. The output only becomes blocking once the configured number of pipelined batches has been written. Pipelining is disabled if a value of 0 is configured. The default value is 0.
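For example, a sketch that allows a few batches to be in flight before the output blocks (the host and the value are illustrative):

output.logstash:
  hosts: ["localhost:5044"]
  # Send up to 3 batches asynchronously while waiting for ACKs.
  pipelining: 3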
port
Deprecated in 5.0.0.
The default port to use if the port number is not given in hosts. The default port number is 10200.
proxy_url
The URL of the SOCKS5 proxy to use when connecting to the Logstash servers. The value must be a URL with a scheme of socks5://. The protocol used to communicate to Logstash is not based on HTTP, so a web proxy cannot be used.
If the SOCKS5 proxy server requires client authentication, then a username and password can be embedded in the URL as shown in the example.
When using a proxy, hostnames are resolved on the proxy server instead of on the client. You can change this behavior by setting the proxy_use_local_resolver option.
output.logstash:
  hosts: ["remote-host:5044"]
  proxy_url: socks5://user:password@socks5-proxy:2233
proxy_use_local_resolver
The proxy_use_local_resolver option determines if Logstash hostnames are resolved locally when using a proxy. The default value is false, which means that when a proxy is used, the name resolution occurs on the proxy server.
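For example, a sketch that resolves the Logstash hostname on the client instead of on the proxy server (host and proxy values are illustrative):

output.logstash:
  hosts: ["remote-host:5044"]
  proxy_url: socks5://socks5-proxy:2233
  # Resolve remote-host locally rather than on the SOCKS5 proxy.
  proxy_use_local_resolver: true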
index
The index root name to write events to. The default is the Beat name. For example, "filebeat" generates "[filebeat-]YYYY.MM.DD" indexes (for example, "filebeat-2015.04.26").
ssl
Configuration options for SSL parameters like the root CA for Logstash connections. See SSL Configuration for more information. To use SSL, you must also configure the Beats input plugin for Logstash to use SSL/TLS.
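For example, a sketch that verifies the Logstash server against a root CA and presents a client certificate (paths and host are illustrative; see SSL Configuration for the full list of ssl options):

output.logstash:
  hosts: ["logs.example.com:5044"]
  # Root CA used to verify the Logstash server certificate.
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Client certificate and key, if the Beats input requires client authentication.
  ssl.certificate: "/etc/pki/client/cert.pem"
  ssl.key: "/etc/pki/client/cert.key"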
timeout
The number of seconds to wait for responses from the Logstash server before timing out. The default is 30 (seconds).
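For example, a sketch that raises the timeout for a slow or distant Logstash server (host and value are illustrative):

output.logstash:
  hosts: ["remote-host:5044"]
  # Wait up to 60 seconds for a response before timing out.
  timeout: 60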
max_retries
The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. Some Beats, such as Filebeat, ignore the max_retries setting and retry until all events are published.
Set max_retries to a value less than 0 to retry until all events are published.
The default is 3.
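For example, a sketch that retries until all events are published instead of dropping them (the host value is illustrative; Filebeat ignores this setting and always retries):

output.logstash:
  hosts: ["localhost:5044"]
  # A value less than 0 retries until all events are published.
  max_retries: -1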
bulk_max_size
The maximum number of events to bulk in a single Logstash request. The default is 2048.
If the Beat sends single events, the events are collected into batches. If the Beat publishes a large batch of events (larger than the value specified by bulk_max_size), the batch is split.
Specifying a larger batch size can improve performance by lowering the overhead of sending events. However big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.
Setting bulk_max_size to values less than or equal to 0 disables buffering in libbeat. When buffering is disabled, Beats that publish single events (such as Packetbeat) send each event directly to Logstash. Beats that publish data in batches (such as Filebeat) send events in batches based on the spooler size.
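For example, a sketch that raises the batch size for a higher-throughput setup (host and value are illustrative; benchmark before settling on a number):

output.logstash:
  hosts: ["localhost:5044"]
  # Bulk up to 4096 events per Logstash request; larger batches lower
  # per-event overhead but increase per-request processing time.
  bulk_max_size: 4096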