Elastic Logging Plugin configuration options
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
Use the following options to configure the Elastic Logging Plugin for Docker. You can pass these options with the `--log-opt` flag when you start a container, or you can set them in the `daemon.json` file for all containers.
Usage examples
To set configuration options when you start a container:
```shell
docker run --log-driver=elastic/elastic-logging-plugin:7.7.1 \
           --log-opt output.elasticsearch.hosts="https://myhost:9200" \
           --log-opt output.elasticsearch.username="myusername" \
           --log-opt output.elasticsearch.password="mypassword" \
           --log-opt output.elasticsearch.index="elastic-log-driver-%{+yyyy.MM.dd}" \
           -it debian:jessie /bin/bash
```
To set configuration options for all containers in the `daemon.json` file:
```json
{
  "log-driver": "elastic/elastic-logging-plugin:7.7.1",
  "log-opts": {
    "output.elasticsearch.hosts": "https://myhost:9200",
    "output.elasticsearch.username": "myusername",
    "output.elasticsearch.password": "mypassword",
    "output.elasticsearch.index": "elastic-log-driver-%{+yyyy.MM.dd}"
  }
}
```
For more examples, see Usage examples.
Elastic Cloud options
| Option | Description |
|---|---|
| `cloud.id` | The Cloud ID found in the Elastic Cloud web console. This ID is used to resolve the Elastic Stack URLs when connecting to Elasticsearch Service on Elastic Cloud. |
| `cloud.auth` | The username and password combination for connecting to Elasticsearch Service on Elastic Cloud. The format is `"username:password"`. |
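For example, the two Cloud settings can stand in for the `hosts`, `username`, and `password` settings when the cluster runs on Elastic Cloud. A sketch of such a container start, where the Cloud ID and credentials below are placeholders, not real values:

```shell
docker run --log-driver=elastic/elastic-logging-plugin:7.7.1 \
           --log-opt cloud.id="MyDeployment:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRhYmNkJGVmZ2g=" \
           --log-opt cloud.auth="myusername:mypassword" \
           -it debian:jessie /bin/bash
```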
Elasticsearch output options
| Option | Default | Description |
|---|---|---|
| `output.elasticsearch.hosts` | | The list of Elasticsearch nodes to connect to. Specify each node as a `URL` or `IP:PORT`. For example: `https://myhost:9200` or `192.0.2.0:9200`. |
| `output.elasticsearch.protocol` | `http` | The protocol (`http` or `https`) that Elasticsearch is reachable on. If you specify a URL for `hosts`, the value of `protocol` is overridden by whatever scheme you specify in the URL. |
| `output.elasticsearch.username` | | The basic authentication username for connecting to Elasticsearch. |
| `output.elasticsearch.password` | | The basic authentication password for connecting to Elasticsearch. |
| `output.elasticsearch.index` | | A format string value that specifies the index to write events to when you’re using daily indices. For example: `"elastic-log-driver-%{+yyyy.MM.dd}"`. |
| *Advanced:* | | |
| `output.elasticsearch.backoff.init` | `1s` | The number of seconds to wait before trying to reconnect to Elasticsearch after a network error. After waiting `backoff.init` seconds, the plugin tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. |
| `output.elasticsearch.backoff.max` | `60s` | The maximum number of seconds to wait before attempting to connect to Elasticsearch after a network error. |
| `output.elasticsearch.bulk_max_size` | `50` | The maximum number of events to bulk in a single Elasticsearch bulk API index request. Specify 0 to allow the queue to determine the batch size. |
| `output.elasticsearch.compression_level` | `0` | The gzip compression level. Valid compression levels range from 1 (best speed) to 9 (best compression). Specify 0 to disable compression. Higher compression levels reduce network usage, but increase CPU usage. |
| `output.elasticsearch.escape_html` | `false` | Whether to escape HTML in strings. |
| `output.elasticsearch.headers` | | Custom HTTP headers to add to each request created by the Elasticsearch output. Specify multiple header values for the same header name by separating them with a comma. |
| `output.elasticsearch.loadbalance` | | Whether to load balance when sending events to multiple hosts. The load balancer also supports multiple workers per host (see `output.elasticsearch.worker`). |
| `output.elasticsearch.max_retries` | `3` | The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. Specify 0 to retry indefinitely. |
| `output.elasticsearch.parameters` | | A dictionary of HTTP parameters to pass within the URL with index operations. |
| `output.elasticsearch.path` | | An HTTP path prefix that is prepended to the HTTP API calls. This is useful for cases where Elasticsearch listens behind an HTTP reverse proxy that exports the API under a custom prefix. |
| `output.elasticsearch.pipeline` | | A format string value that specifies the ingest node pipeline to write events to. |
| `output.elasticsearch.proxy_url` | | The URL of the proxy to use when connecting to the Elasticsearch servers. Specify a complete URL. |
| `output.elasticsearch.timeout` | `90` | The HTTP request timeout in seconds for the Elasticsearch request. |
| `output.elasticsearch.worker` | `1` | The number of workers per configured host publishing events to Elasticsearch. Use with load balancing mode (`loadbalance`) enabled. |
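As an illustration, a `daemon.json` that tunes a few of the advanced settings above might look like the following sketch. The host, credentials, and the specific values chosen here are placeholders for illustration, not tuning recommendations:

```json
{
  "log-driver": "elastic/elastic-logging-plugin:7.7.1",
  "log-opts": {
    "output.elasticsearch.hosts": "https://myhost:9200",
    "output.elasticsearch.username": "myusername",
    "output.elasticsearch.password": "mypassword",
    "output.elasticsearch.bulk_max_size": "200",
    "output.elasticsearch.compression_level": "5",
    "output.elasticsearch.timeout": "30"
  }
}
```

Note that all `log-opts` values must be strings, so numeric settings are quoted.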
Logstash output options
| Option | Default | Description |
|---|---|---|
| `output.logstash.hosts` | | The list of known Logstash servers to connect to. If load balancing is disabled, but multiple hosts are configured, one host is selected randomly (there is no precedence). If one host becomes unreachable, another one is selected randomly. If no port is specified, the default is `5044`. |
| `output.logstash.index` | | The index root name to write events to. For example: `"elastic-log-driver"`. |
| *Advanced:* | | |
| `output.logstash.backoff.init` | `1s` | The number of seconds to wait before trying to reconnect to Logstash after a network error. After waiting `backoff.init` seconds, the plugin tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. |
| `output.logstash.backoff.max` | `60s` | The maximum number of seconds to wait before attempting to connect to Logstash after a network error. |
| `output.logstash.bulk_max_size` | `2048` | The maximum number of events to bulk in a single Logstash request. Specify 0 to allow the queue to determine the batch size. |
| `output.logstash.compression_level` | `3` | The gzip compression level. Valid compression levels range from 1 (best speed) to 9 (best compression). Specify 0 to disable compression. Higher compression levels reduce network usage, but increase CPU usage. |
| `output.logstash.escape_html` | `false` | Whether to escape HTML in strings. |
| `output.logstash.loadbalance` | `false` | Whether to load balance when sending events to multiple Logstash hosts. If set to `false`, the output plugin sends all events to only one host (determined at random) and switches to another host if the selected one becomes unresponsive. |
| `output.logstash.pipelining` | `2` | The number of batches to send asynchronously to Logstash while waiting for an ACK from Logstash. Specify 0 to disable pipelining. |
| `output.logstash.proxy_url` | | The URL of the SOCKS5 proxy to use when connecting to the Logstash servers. The value must be a URL with a scheme of `socks5://`. |
| `output.logstash.proxy_use_local_resolver` | `false` | Whether to resolve Logstash hostnames locally when using a proxy. If `false`, name resolution occurs on the proxy server. |
| `output.logstash.slow_start` | `false` | When enabled, only a subset of events in a batch are transferred per transaction. If there are no errors, the number of events per transaction is increased up to the bulk max size (see `bulk_max_size`). |
| `output.logstash.timeout` | `30` | The number of seconds to wait for responses from the Logstash server before timing out. |
| `output.logstash.ttl` | `0` | Time to live for a connection to Logstash after which the connection will be re-established. Useful when Logstash hosts represent load balancers. Because connections to Logstash hosts are sticky, operating behind load balancers can lead to uneven load distribution across instances. Specify a TTL on the connection to distribute connections across instances. Specify 0 to disable this feature. This option is not supported if `pipelining` is set. |
| `output.logstash.worker` | `1` | The number of workers per configured host publishing events to Logstash. Use with load balancing mode (`loadbalance`) enabled. |
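To route container logs through Logstash rather than directly to Elasticsearch, a minimal `daemon.json` sketch might look like this (the hostname and timeout value are placeholders):

```json
{
  "log-driver": "elastic/elastic-logging-plugin:7.7.1",
  "log-opts": {
    "output.logstash.hosts": "mylogstashhost:5044",
    "output.logstash.timeout": "15"
  }
}
```

The Logstash instance then needs a `beats` input listening on the same port to receive the events.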
Kafka output options
Coming in a future update. This documentation is a work in progress.

Need the docs now? See the Kafka output docs for Filebeat. The Elastic Logging Plugin supports most of the same options; just make sure you use the fully qualified setting names.
Redis output options
Coming in a future update. This documentation is a work in progress.

Need the docs now? See the Redis output docs for Filebeat. The Elastic Logging Plugin supports most of the same options; just make sure you use the fully qualified setting names.