Kafka output plugin
- Plugin version: v7.1.3
- Released on: 2018-08-27
- Changelog
For other versions, see the Versioned plugin docs.
Getting Help
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Description
Write events to a Kafka topic.
This plugin uses Kafka Client 1.1.0. For broker compatibility, see the official Kafka compatibility reference. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
If you require features not yet available in this plugin (including client version upgrades), please file an issue with details about what you need.
This output supports connecting to Kafka over:
- SSL (requires plugin version 3.0.0 or later)
- Kerberos SASL (requires plugin version 5.1.0 or later)
By default security is disabled but can be turned on as needed.
The only required configuration is the topic_id.
The default codec is plain. Logstash will encode your events with not only the message field but also a timestamp and hostname.
If you want the full content of your events to be sent as JSON, you should set the codec in the output configuration like this:
output {
  kafka {
    codec => json
    topic_id => "mytopic"
  }
}
For more information see http://kafka.apache.org/documentation.html#theproducer
Kafka producer configuration: http://kafka.apache.org/documentation.html#newproducerconfigs
Kafka Output Configuration Options
This plugin supports the following configuration options plus the Common Options described later.
Setting | Input type | Required
---|---|---
acks | string, one of ["0", "1", "all"] | No
batch_size | number | No
bootstrap_servers | string | No
buffer_memory | number | No
client_id | string | No
compression_type | string, one of ["none", "gzip", "snappy", "lz4"] | No
jaas_path | a valid filesystem path | No
kerberos_config | a valid filesystem path | No
key_serializer | string | No
linger_ms | number | No
message_key | string | No
metadata_fetch_timeout_ms | number | No
metadata_max_age_ms | number | No
receive_buffer_bytes | number | No
reconnect_backoff_ms | number | No
request_timeout_ms | string | No
retries | number | No
retry_backoff_ms | number | No
sasl_kerberos_service_name | string | No
sasl_mechanism | string | No
security_protocol | string, one of ["PLAINTEXT", "SSL", "SASL_PLAINTEXT", "SASL_SSL"] | No
send_buffer_bytes | number | No
ssl_key_password | password | No
ssl_keystore_location | a valid filesystem path | No
ssl_keystore_password | password | No
ssl_keystore_type | string | No
ssl_truststore_location | a valid filesystem path | No
ssl_truststore_password | password | No
ssl_truststore_type | string | No
topic_id | string | Yes
value_serializer | string | No
Also see Common Options for a list of options supported by all output plugins.
acks
- Value can be any of: 0, 1, all
- Default value is "1"
The number of acknowledgments the producer requires the leader to have received before considering a request complete.
- acks=0: the producer will not wait for any acknowledgment from the server at all.
- acks=1: the leader will write the record to its local log, but will respond without awaiting full acknowledgement from all followers.
- acks=all: the leader will wait for the full set of in-sync replicas to acknowledge the record.
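For example, to trade some throughput for stronger delivery guarantees, you could require acknowledgement from all in-sync replicas. A minimal sketch (the topic name is illustrative):
output {
  kafka {
    topic_id => "mytopic"   # illustrative topic name
    acks => "all"           # wait for the full set of in-sync replicas to acknowledge each record
  }
}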
batch_size
- Value type is number
- Default value is 16384
The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes.
bootstrap_servers
- Value type is string
- Default value is "localhost:9092"
This is for bootstrapping, and the producer will only use it for getting metadata (topics, partitions and replicas). The socket connections for sending the actual data will be established based on the broker information returned in the metadata. The format is host1:port1,host2:port2, and the list can be a subset of brokers or a VIP pointing to a subset of brokers.
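As a sketch, pointing the producer at two brokers for metadata discovery (the host names are illustrative):
output {
  kafka {
    topic_id => "mytopic"                            # illustrative topic name
    bootstrap_servers => "kafka1:9092,kafka2:9092"   # illustrative hosts; any reachable subset of the cluster works
  }
}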
buffer_memory
- Value type is number
- Default value is 33554432
The total bytes of memory the producer can use to buffer records waiting to be sent to the server.
client_id
- Value type is string
- There is no default value for this setting.
The id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just IP/port by allowing a logical application name to be included with the request.
compression_type
- Value can be any of: none, gzip, snappy, lz4
- Default value is "none"
The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none, gzip, snappy, or lz4.
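A minimal sketch enabling compression (the topic name is illustrative):
output {
  kafka {
    topic_id => "mytopic"          # illustrative topic name
    compression_type => "snappy"   # compress record batches before they are sent to the broker
  }
}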
jaas_path
- Value type is path
- There is no default value for this setting.
The Java Authentication and Authorization Service (JAAS) API supplies user authentication and authorization services for Kafka. This setting provides the path to the JAAS file. Sample JAAS file for Kafka client:
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  renewTicket=true
  serviceName="kafka";
};
Please note that specifying jaas_path and kerberos_config in the config file will add these to the global JVM system properties. This means that if you have multiple Kafka plugin instances (inputs or outputs), all of them will share the same jaas_path and kerberos_config. If this is not desirable, you would have to run separate instances of Logstash on different JVM instances.
kerberos_config
- Value type is path
- There is no default value for this setting.
Optional path to a Kerberos config file. This is krb5.conf style, as detailed in https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html
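Putting the Kerberos-related settings together, here is a sketch of a SASL/GSSAPI output; all host names and file paths are illustrative and must match your environment:
output {
  kafka {
    topic_id => "mytopic"                          # illustrative topic name
    bootstrap_servers => "kafka1:9092"             # illustrative broker
    security_protocol => "SASL_PLAINTEXT"          # SASL without TLS; use SASL_SSL to add encryption
    sasl_kerberos_service_name => "kafka"          # must match the broker's Kerberos principal name
    jaas_path => "/etc/logstash/kafka_jaas.conf"   # illustrative path to a JAAS file like the sample above
    kerberos_config => "/etc/krb5.conf"            # illustrative path to the krb5.conf file
  }
}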
key_serializer
- Value type is string
- Default value is "org.apache.kafka.common.serialization.StringSerializer"
Serializer class for the key of the message.
linger_ms
- Value type is number
- Default value is 0
The producer groups together any records that arrive between request transmissions into a single batched request. Normally this occurs only under load, when records arrive faster than they can be sent out. However, in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes that by adding a small amount of artificial delay: rather than immediately sending out a record, the producer will wait for up to the given delay so that other records can be sent and the sends can be batched together.
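As an illustrative sketch, trading a little latency for larger batches (the values are examples, not recommendations):
output {
  kafka {
    topic_id => "mytopic"   # illustrative topic name
    batch_size => 65536     # allow batches of up to 64 KB per partition
    linger_ms => 5          # wait up to 5 ms for more records before sending a batch
  }
}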
message_key
- Value type is string
- There is no default value for this setting.
The key for the message.
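Because this is a string setting, it can take a sprintf field reference, so the key can come from the event itself; Kafka routes records with the same key to the same partition. A minimal sketch (the user_id field is illustrative):
output {
  kafka {
    topic_id => "mytopic"         # illustrative topic name
    message_key => "%{user_id}"   # illustrative field reference; same key means same partition
  }
}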
metadata_fetch_timeout_ms
- Value type is number
- Default value is 60000
The timeout setting for the initial metadata request, which fetches the topic metadata.
metadata_max_age_ms
- Value type is number
- Default value is 300000
The maximum time in milliseconds before a metadata refresh is forced.
receive_buffer_bytes
- Value type is number
- Default value is 32768
The size of the TCP receive buffer to use when reading data.
reconnect_backoff_ms
- Value type is number
- Default value is 10
The amount of time to wait before attempting to reconnect to a given host when a connection fails.
request_timeout_ms
- Value type is string
- There is no default value for this setting.
This configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary, or fail the request if retries are exhausted.
retries
- Value type is number
- There is no default value for this setting.
The default retry behavior is to retry until successful. To prevent data loss, the use of this setting is discouraged.
If you choose to set retries, a value greater than zero will cause the client to retry only a fixed number of times. This will result in data loss if a transport fault lasts longer than your retry count covers (network outage, Kafka down, etc.).
A value less than zero is a configuration error.
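If you do choose to set it, a minimal sketch of bounded retries, accepting the data-loss trade-off described above (the topic name is illustrative):
output {
  kafka {
    topic_id => "mytopic"   # illustrative topic name
    retries => 5            # retry at most 5 times instead of retrying until successful
  }
}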
retry_backoff_ms
- Value type is number
- Default value is 100
The amount of time to wait before attempting to retry a failed produce request to a given topic partition.
sasl_kerberos_service_name
- Value type is string
- There is no default value for this setting.
The Kerberos principal name that Kafka broker runs as. This can be defined either in Kafka’s JAAS config or in Kafka’s config.
sasl_mechanism
- Value type is string
- Default value is "GSSAPI"
SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
security_protocol
- Value can be any of: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL
- Default value is "PLAINTEXT"
Security protocol to use, which can be any of PLAINTEXT, SSL, SASL_PLAINTEXT, or SASL_SSL.
send_buffer_bytes
- Value type is number
- Default value is 131072
The size of the TCP send buffer to use when sending data.
ssl_key_password
- Value type is password
- There is no default value for this setting.
The password of the private key in the key store file.
ssl_keystore_location
- Value type is path
- There is no default value for this setting.
If client authentication is required, this setting stores the keystore path.
ssl_keystore_password
- Value type is password
- There is no default value for this setting.
If client authentication is required, this setting stores the keystore password.
ssl_keystore_type
- Value type is string
- There is no default value for this setting.
The keystore type.
ssl_truststore_location
- Value type is path
- There is no default value for this setting.
The JKS truststore path to validate the Kafka broker’s certificate.
ssl_truststore_password
- Value type is password
- There is no default value for this setting.
The truststore password.
ssl_truststore_type
- Value type is string
- There is no default value for this setting.
The truststore type.
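Taken together, here is a sketch of an SSL-encrypted output that also presents a client certificate for authentication; all paths and passwords are illustrative:
output {
  kafka {
    topic_id => "mytopic"                                             # illustrative topic name
    bootstrap_servers => "kafka1:9093"                                # illustrative broker with an SSL listener
    security_protocol => "SSL"
    ssl_truststore_location => "/etc/logstash/kafka.truststore.jks"   # illustrative path; validates the broker's certificate
    ssl_truststore_password => "changeit"                             # illustrative password
    ssl_keystore_location => "/etc/logstash/kafka.keystore.jks"       # illustrative path; only needed for client authentication
    ssl_keystore_password => "changeit"                               # illustrative password
    ssl_key_password => "changeit"                                    # illustrative password for the private key
  }
}
topic_id
- This is a required setting.
- Value type is string
- There is no default value for this setting.
The topic to produce messages to.
value_serializer
- Value type is string
- Default value is "org.apache.kafka.common.serialization.StringSerializer"
Serializer class for the value of the message.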
Common Options
The following configuration options are supported by all output plugins:
codec
- Value type is codec
- Default value is "plain"
The codec used for output data. Output codecs are a convenient method for encoding your data before it leaves the output without needing a separate filter in your Logstash pipeline.
enable_metric
- Value type is boolean
- Default value is true
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
id
- Value type is string
- There is no default value for this setting.
Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have two kafka outputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
output {
  kafka {
    id => "my_plugin_id"
  }
}