Configure the Elasticsearch output

When you specify Elasticsearch for the output, Metricbeat sends the transactions directly to Elasticsearch by using the Elasticsearch HTTP API.
Example configuration:
output.elasticsearch: hosts: ["https://localhost:9200"] index: "metricbeat-%{[beat.version]}-%{+yyyy.MM.dd}" ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] ssl.certificate: "/etc/pki/client/cert.pem" ssl.key: "/etc/pki/client/cert.key"
To enable SSL, add https to all URLs defined under hosts.

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "metricbeat_internal"
  password: "YOUR_PASSWORD"

If the Elasticsearch nodes are defined by IP:PORT, add protocol: https to the YAML file.

output.elasticsearch:
  hosts: ["localhost"]
  protocol: "https"
  username: "metricbeat_internal"
  password: "YOUR_PASSWORD"
For more information about securing Metricbeat, see Securing Metricbeat.
Compatibility

This output works with all compatible versions of Elasticsearch. See the Elastic Support Matrix.
Configuration options

You can specify the following options in the elasticsearch section of the metricbeat.yml config file:
enabled

The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.

The default value is true.
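For example, a minimal sketch that keeps the output configured but turns it off (the host value is a placeholder):

output.elasticsearch:
  # disable the output without deleting its configuration
  enabled: false
  hosts: ["http://localhost:9200"]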
hosts

The list of Elasticsearch nodes to connect to. The events are distributed to these nodes in round robin order. If one node becomes unreachable, the event is automatically sent to another node. Each Elasticsearch node can be defined as a URL or IP:PORT. For example: http://192.15.3.2, https://es.found.io:9230, or 192.24.3.2:9300. If no port is specified, 9200 is used.
output.elasticsearch: hosts: ["10.45.3.2:9220", "10.45.3.1:9230"] protocol: https path: /elasticsearch
In the previous example, the Elasticsearch nodes are available at https://10.45.3.2:9220/elasticsearch and https://10.45.3.1:9230/elasticsearch.
compression_level

The gzip compression level. Setting this value to 0 disables compression. The compression level must be in the range of 1 (best speed) to 9 (best compression). Increasing the compression level reduces network usage but increases CPU usage.

The default value is 0.
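For example, a sketch that trades some CPU for less network traffic (level 5 is an arbitrary middle value; tune for your environment):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  # moderate gzip compression; 1 = best speed, 9 = best compression
  compression_level: 5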
worker

The number of workers per configured host publishing events to Elasticsearch. This is best used with load balancing mode enabled. Example: If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host).

The default value is 1.
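A minimal sketch matching the example above (the host addresses are hypothetical):

output.elasticsearch:
  hosts: ["10.45.3.2:9220", "10.45.3.1:9230"]
  # 3 workers per host, so 6 publishing workers in total
  worker: 3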
username

The basic authentication username for connecting to Elasticsearch.

password

The basic authentication password for connecting to Elasticsearch.
parameters

Dictionary of HTTP parameters to pass within the URL with index operations.
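For example, a sketch that appends a query parameter to each indexing request (the parameter chosen here is purely illustrative):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  parameters:
    # illustrative query parameter appended to index operations
    routing: "metricbeat"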
protocol

The name of the protocol Elasticsearch is reachable on. The options are: http or https. The default is http. However, if you specify a URL for hosts, the value of protocol is overridden by whatever scheme you specify in the URL.
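For example, in this sketch the connection uses https because the URL scheme takes precedence over the protocol setting:

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  # ignored: the https scheme in the URL above overrides this value
  protocol: http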
path

An HTTP path prefix that is prepended to the HTTP API calls. This is useful for cases where Elasticsearch listens behind an HTTP reverse proxy that exposes the API under a custom prefix.
headers

Custom HTTP headers to add to each request created by the Elasticsearch output. Example:
output.elasticsearch.headers:
  X-My-Header: Header contents
It is generally possible to specify multiple header values for the same header name by separating them with a comma.
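For instance, a sketch sending two values for the same hypothetical header:

output.elasticsearch.headers:
  # comma-separated values for a single header name
  X-My-Header: value-one,value-two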
proxy_url

The URL of the proxy to use when connecting to the Elasticsearch servers. The value may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed. If a value is not specified through the configuration file, proxy environment variables are used. See the Go documentation for more information about the environment variables.
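A minimal sketch routing traffic through a hypothetical local proxy:

output.elasticsearch:
  hosts: ["https://es.example.com:9200"]
  # proxy host and port are placeholders
  proxy_url: http://proxy.example.com:3128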
index

The index name to write events to. The default is "metricbeat-%{[beat.version]}-%{+yyyy.MM.dd}" (for example, "metricbeat-6.3.2-2017.04.26"). If you change this setting, you also need to configure the setup.template.name and setup.template.pattern options (see Load the Elasticsearch index template). If you are using the pre-built Kibana dashboards, you also need to set the setup.dashboards.index option (see Load the Kibana dashboards).
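For example, a sketch that writes to a custom daily index together with the matching template and dashboard settings the paragraph above calls for ("customname" is a placeholder):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  index: "customname-%{[beat.version]}-%{+yyyy.MM.dd}"
# template settings must match the custom index name
setup.template.name: "customname"
setup.template.pattern: "customname-*"
# only needed if you use the pre-built Kibana dashboards
setup.dashboards.index: "customname-*"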
indices

Array of index selector rules supporting conditionals, format string based field access, and name mappings. The first matching rule is used to set the index for the event to be published. If indices is missing or no rule matches, the index field is used.

Rule settings:

index: The index format string to use. If the fields used are missing, the rule fails.
mapping: Dictionary mapping index names to new names.
default: Default string value if mapping does not find a match.
when: Condition which must succeed in order to execute the current rule.

Example elasticsearch output with indices:
output.elasticsearch: hosts: ["http://localhost:9200"] index: "logs-%{[beat.version]}-%{+yyyy.MM.dd}" indices: - index: "critical-%{[beat.version]}-%{+yyyy.MM.dd}" when.contains: message: "CRITICAL" - index: "error-%{[beat.version]}-%{+yyyy.MM.dd}" when.contains: message: "ERR"
pipeline

A format string value that specifies the ingest node pipeline to write events to.
output.elasticsearch: hosts: ["http://localhost:9200"] pipeline: my_pipeline_id
For more information, see Parse data by using ingest node.
pipelines

Similar to the indices array, this is an array of pipeline selector configurations supporting conditionals, format string based field access, and name mappings. The first matching rule is used to set the pipeline for the event to be published. If pipelines is missing or no rule matches, the pipeline field is used.

Example elasticsearch output with pipelines:
filebeat.inputs:
- type: log
  paths: ["/var/log/app/normal/*.log"]
  fields:
    type: "normal"
- type: log
  paths: ["/var/log/app/critical/*.log"]
  fields:
    type: "critical"

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"
  pipelines:
    - pipeline: critical_pipeline
      when.equals:
        fields.type: "critical"
    - pipeline: normal_pipeline
      when.equals:
        fields.type: "normal"
max_retries

The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped.

Set max_retries to a value less than 0 to retry until all events are published.

The default is 3.
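For example, a sketch that retries until every event is published rather than dropping events after repeated failures (an aggressive choice, assuming you prefer backpressure over data loss):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  # a value below 0 means retry indefinitely
  max_retries: -1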
bulk_max_size

The maximum number of events to bulk in a single Elasticsearch bulk API index request. The default is 50.

Events can be collected into batches. Metricbeat splits batches larger than bulk_max_size into multiple batches.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Setting bulk_max_size to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.
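A sketch that raises the batch size to reduce per-request overhead (200 is an arbitrary value; tune it against your cluster):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  # larger batches lower request overhead but raise per-request processing time
  bulk_max_size: 200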
backoff.init

The number of seconds to wait before trying to reconnect to Elasticsearch after a network error. After waiting backoff.init seconds, Metricbeat tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to backoff.max. After a successful connection, the backoff timer is reset. The default is 1s.

backoff.max

The maximum number of seconds to wait before attempting to connect to Elasticsearch after a network error. The default is 60s.
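The two settings work together: this sketch starts the exponential backoff at 5 seconds and caps it at 120 seconds (both values are illustrative):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  # first reconnect attempt after 5s; doubles on failure up to backoff.max
  backoff.init: 5s
  backoff.max: 120s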
timeout

The HTTP request timeout in seconds for the Elasticsearch request. The default is 90.
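For example, a sketch that raises the timeout for a slow or distant cluster (120 is illustrative):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  # allow each request up to 120 seconds before timing out
  timeout: 120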
ssl

Configuration options for SSL parameters like the certificate authority to use for HTTPS-based connections. If the ssl section is missing, the host CAs are used for HTTPS connections to Elasticsearch.

See Specify SSL settings for more information.
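A minimal sketch pinning a custom certificate authority, mirroring the example at the top of this page (the path is a placeholder):

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  # trust only the given CA instead of the host CAs
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]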