Kv filter plugin
- Plugin version: v4.6.0
- Released on: 2022-01-31
- Changelog
For other versions, see the Versioned plugin docs.
Getting Help
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Description
This filter helps automatically parse messages (or specific event fields) which are of the `foo=bar` variety.
For example, if you have a log message which contains `ip=1.2.3.4 error=REFUSED`, you can parse those automatically by configuring:
filter { kv { } }
The above will result in a message of `ip=1.2.3.4 error=REFUSED` having the fields:
- ip: 1.2.3.4
- error: REFUSED
This is great for postfix, iptables, and other types of logs that tend towards `key=value` syntax.
You can configure any arbitrary strings to split your data on, in case your data is not structured using `=` signs and whitespace.
For example, this filter can also be used to parse query parameters like `foo=bar&baz=fizz` by setting the `field_split` parameter to `&`.
Event Metadata and the Elastic Common Schema (ECS)
The plugin behaves the same regardless of ECS compatibility, except giving a warning when ECS is enabled and `target` isn’t set.
Set the `target` option to avoid potential schema conflicts.
Kv Filter Configuration Options
This plugin supports the following configuration options plus the Common Options described later.

Setting | Input type | Required
---|---|---
allow_duplicate_values | boolean | No
allow_empty_values | boolean | No
default_keys | hash | No
ecs_compatibility | string | No
exclude_keys | array | No
field_split | string | No
field_split_pattern | string | No
include_brackets | boolean | No
include_keys | array | No
prefix | string | No
recursive | boolean | No
remove_char_key | string | No
remove_char_value | string | No
source | string | No
tag_on_failure | string | No
tag_on_timeout | string | No
target | string | No
timeout_millis | number | No
transform_key | string, one of ["lowercase", "uppercase", "capitalize"] | No
transform_value | string, one of ["lowercase", "uppercase", "capitalize"] | No
trim_key | string | No
trim_value | string | No
value_split | string | No
value_split_pattern | string | No
whitespace | string, one of ["lenient", "strict"] | No
Also see Common Options for a list of options supported by all filter plugins.
allow_duplicate_values
- Value type is boolean
- Default value is true

A boolean option for removing duplicate key/value pairs. When set to false, only one unique key/value pair will be preserved.
For example, consider a source like `from=me from=me`. `[from]` will map to an Array with two elements: `["me", "me"]`. To only keep unique key/value pairs, you could use this configuration:
filter { kv { allow_duplicate_values => false } }
allow_empty_values
- Value type is boolean
- Default value is false

A boolean option for explicitly including empty values. When set to true, empty values will be added to the event.
Parsing empty values typically requires `whitespace => strict`.
default_keys
- Value type is hash
- Default value is {}
A hash specifying the default keys and their values which should be added to the event in case these keys do not exist in the source field being parsed.
filter { kv { default_keys => [ "from", "logstash@example.com", "to", "default@dev.null" ] } }
ecs_compatibility
- Value type is string
- Supported values are:
  - `disabled`: does not use ECS-compatible field names
  - `v1`: Elastic Common Schema compliant behavior (warns when `target` isn’t set)

Controls this plugin’s compatibility with the Elastic Common Schema (ECS). See Event Metadata and the Elastic Common Schema (ECS) for detailed information.
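For instance, to opt into ECS-compatible behavior and avoid the warning by naming a target explicitly (the target field name `kv` below is just an illustration):

```conf
filter {
  kv {
    ecs_compatibility => "v1"
    # nesting parsed pairs under a target avoids schema conflicts at the event root
    target => "kv"
  }
}
```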
exclude_keys
- Value type is array
- Default value is []
An array specifying the parsed keys which should not be added to the event. By default no keys will be excluded.
For example, consider a source like `Hey, from=<abc>, to=def foo=bar`. To exclude `from` and `to`, but retain the `foo` key, you could use this configuration:
filter { kv { exclude_keys => [ "from", "to" ] } }
field_split
- Value type is string
- Default value is " "
A string of characters to use as single-character field delimiters for parsing out key-value pairs.
These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.
Example with URL Query Strings
For example, to split out the args from a URL query string such as `?pin=12345~0&d=123&e=foo@bar.com&oq=bobo&ss=12345`:
filter { kv { field_split => "&?" } }
The above splits on both `&` and `?` characters, giving you the following fields:
- pin: 12345~0
- d: 123
- e: foo@bar.com
- oq: bobo
- ss: 12345
field_split_pattern
- Value type is string
- There is no default value for this setting.
A regular expression to use as the field delimiter for parsing out key-value pairs. Useful to define multi-character field delimiters.
Setting the `field_split_pattern` option takes precedence over the `field_split` option.
Note that you should avoid using captured groups in your regex and you should be cautious with lookaheads or lookbehinds and positional anchors.
For example, to split fields on a repetition of one or more colons, as in `k1=v1:k2=v2::k3=v3:::k4=v4`:
filter { kv { field_split_pattern => ":+" } }
To split fields on a regex character that needs escaping, like the plus sign in `k1=v1++k2=v2++k3=v3++k4=v4`:
filter { kv { field_split_pattern => "\\+\\+" } }
include_brackets
- Value type is boolean
- Default value is true
A boolean specifying whether to treat square brackets, angle brackets, and parentheses as value "wrappers" that should be removed from the value.
filter { kv { include_brackets => true } }
For example, the result of this line:
bracketsone=(hello world) bracketstwo=[hello world] bracketsthree=<hello world>
will be:
- bracketsone: hello world
- bracketstwo: hello world
- bracketsthree: hello world
instead of:
- bracketsone: (hello
- bracketstwo: [hello
- bracketsthree: <hello
include_keys
- Value type is array
- Default value is []
An array specifying the parsed keys which should be added to the event. By default all keys will be added.
For example, consider a source like `Hey, from=<abc>, to=def foo=bar`. To include `from` and `to`, but exclude the `foo` key, you could use this configuration:
filter { kv { include_keys => [ "from", "to" ] } }
prefix
- Value type is string
- Default value is ""

A string to prepend to all of the extracted keys.
For example, to prepend `arg_` to all keys:
filter { kv { prefix => "arg_" } }
recursive
- Value type is boolean
- Default value is false

A boolean specifying whether to drill down into values and recursively get more key-value pairs from them. The extra key-value pairs will be stored as subkeys of the root key.
By default, values are not parsed recursively.
filter { kv { recursive => "true" } }
remove_char_key
- Value type is string
- There is no default value for this setting.
A string of characters to remove from the key.
These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.
Contrary to the trim options, all characters are removed from the key, whatever their position.
For example, to remove `<`, `>`, `[`, `]` and `,` characters from keys:
filter { kv { remove_char_key => "<>\[\]," } }
remove_char_value
- Value type is string
- There is no default value for this setting.
A string of characters to remove from the value.
These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.
Contrary to the trim options, all characters are removed from the value, whatever their position.
For example, to remove `<`, `>`, `[`, `]` and `,` characters from values:
filter { kv { remove_char_value => "<>\[\]," } }
source
- Value type is string
- Default value is "message"

The field to perform `key=value` searching on.
For example, to process the `not_the_message` field:
filter { kv { source => "not_the_message" } }
target
- Value type is string
- There is no default value for this setting.
The name of the container to put all of the key-value pairs into.
If this setting is omitted, fields will be written to the root of the event, as individual fields.
For example, to place all keys into the event field `kv`:
filter { kv { target => "kv" } }
tag_on_failure
- Value type is string
- Default value is `_kv_filter_error`
When a kv operation causes a runtime exception to be thrown within the plugin, the operation is safely aborted without crashing the plugin, and the event is tagged with the provided value.
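For example, to replace the default tag with one of your own (the tag name below is arbitrary), so that failed events can be routed or inspected separately:

```conf
filter {
  kv {
    # events that hit a runtime error during parsing get this tag
    # instead of the default _kv_filter_error
    tag_on_failure => "_kvparsefailure"
  }
}
```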
tag_on_timeout
- Value type is string
- Default value is `_kv_filter_timeout`
When timeouts are enabled and a kv operation is aborted, the event is tagged with the provided value (see: `timeout_millis`).
timeout_millis
- Value type is number
- Default value is 30000 (30 seconds)
- Set to zero (`0`) to disable timeouts
Timeouts provide a safeguard against inputs that are pathological against the regular expressions that are used to extract key/value pairs. When parsing an event exceeds this threshold, the operation is aborted and the event is tagged in order to prevent the operation from blocking the pipeline (see: `tag_on_timeout`).
transform_key
- Value can be any of: `lowercase`, `uppercase`, `capitalize`
- There is no default value for this setting.
Transform keys to lower case, upper case or capitals.
For example, to lowercase all keys:
filter { kv { transform_key => "lowercase" } }
transform_value
- Value can be any of: `lowercase`, `uppercase`, `capitalize`
- There is no default value for this setting.
Transform values to lower case, upper case or capitals.
For example, to capitalize all values:
filter { kv { transform_value => "capitalize" } }
trim_key
- Value type is string
- There is no default value for this setting.
A string of characters to trim from the key. This is useful if your keys are wrapped in brackets or start with space.
These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.
Only leading and trailing characters are trimmed from the key.
For example, to trim `<`, `>`, `[`, `]` and `,` characters from keys:
filter { kv { trim_key => "<>\[\]," } }
trim_value
- Value type is string
- There is no default value for this setting.

A string of characters to trim from the value. This is useful if your values are wrapped in brackets or are terminated with commas (like postfix logs).
These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.
Only leading and trailing characters are trimmed from the value.
For example, to trim `<`, `>`, `[`, `]` and `,` characters from values:
filter { kv { trim_value => "<>\[\]," } }
value_split
- Value type is string
- Default value is "="
A non-empty string of characters to use as single-character value delimiters for parsing out key-value pairs.
These characters form a regex character class and thus you must escape special regex characters like `[` or `]` using `\`.
For example, to identify key-values such as `key1:value1 key2:value2`:
filter { kv { value_split => ":" } }
value_split_pattern
- Value type is string
- There is no default value for this setting.
A regular expression to use as the value delimiter for parsing out key-value pairs. Useful to define multi-character value delimiters.
Setting the `value_split_pattern` option takes precedence over the `value_split` option.
Note that you should avoid using captured groups in your regex and you should be cautious with lookaheads or lookbehinds and positional anchors.
See `field_split_pattern` for examples.
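As one additional sketch, to parse pairs whose key and value are joined by a multi-character sequence such as `k1==v1 k2==v2`:

```conf
filter {
  kv {
    # "==" separates key from value; field splitting still defaults to a space
    value_split_pattern => "=="
  }
}
```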
whitespace
- Value can be any of: `lenient`, `strict`
- Default value is `lenient`
An option specifying whether to be lenient or strict with the acceptance of unnecessary whitespace surrounding the configured value-split sequence.
By default the plugin runs in `lenient` mode, which ignores spaces that occur before or after the value-splitter. While this allows the plugin to make reasonable guesses with most input, in some situations it may be too lenient.
You may want to enable `whitespace => strict` mode if you have control of the input data and can guarantee that no extra spaces are added surrounding the pattern you have defined for splitting values. Doing so will ensure that a field-splitter sequence immediately following a value-splitter will be interpreted as an empty field.
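A minimal strict-mode sketch:

```conf
filter {
  kv {
    # spaces around the value-splitter are no longer ignored;
    # "key1= key2=value2" treats key1 as having an empty value
    whitespace => "strict"
  }
}
```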
Common Options
The following configuration options are supported by all filter plugins:

Setting | Input type | Required
---|---|---
add_field | hash | No
add_tag | array | No
enable_metric | boolean | No
id | string | No
periodic_flush | boolean | No
remove_field | array | No
remove_tag | array | No
add_field
- Value type is hash
- Default value is {}
If this filter is successful, add any arbitrary fields to this event.
Field names can be dynamic and include parts of the event using the `%{field}` syntax.
Example:
filter { kv { add_field => { "foo_%{somefield}" => "Hello world, from %{host}" } } }
# You can also add multiple fields at once:
filter { kv { add_field => { "foo_%{somefield}" => "Hello world, from %{host}" "new_field" => "new_static_value" } } }
If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello` if it is present, with the value above and the `%{host}` piece replaced with that value from the event. The second example would also add a hardcoded field.
add_tag
- Value type is array
- Default value is []
If this filter is successful, add arbitrary tags to the event.
Tags can be dynamic and include parts of the event using the `%{field}` syntax.
Example:
filter { kv { add_tag => [ "foo_%{somefield}" ] } }
# You can also add multiple tags at once:
filter { kv { add_tag => [ "foo_%{somefield}", "taggedy_tag"] } }
If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).
enable_metric
- Value type is boolean
- Default value is true
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
id
- Value type is string
- There is no default value for this setting.
Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one.
It is strongly recommended to set this ID in your configuration. This is particularly useful
when you have two or more plugins of the same type, for example, if you have 2 kv filters.
Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
filter { kv { id => "ABC" } }
Variable substitution in the `id` field only supports environment variables and does not support the use of values from the secret store.
periodic_flush
- Value type is boolean
- Default value is false
Call the filter flush method at regular interval. Optional.
remove_field
- Value type is array
- Default value is []

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{field}` syntax.
Example:
filter { kv { remove_field => [ "foo_%{somefield}" ] } }
# You can also remove multiple fields at once:
filter { kv { remove_field => [ "foo_%{somefield}", "my_extraneous_field" ] } }
If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.
remove_tag
- Value type is array
- Default value is []
If this filter is successful, remove arbitrary tags from the event.
Tags can be dynamic and include parts of the event using the `%{field}` syntax.
Example:
filter { kv { remove_tag => [ "foo_%{somefield}" ] } }
# You can also remove multiple tags at once:
filter { kv { remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"] } }
If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.