multiline
This filter has been deprecated in favor of the multiline codec. The multiline filter is not thread-safe and cannot handle messages from multiple streams.
This filter will collapse multiline messages from a single source into one Logstash event.
The original goal of this filter was to allow joining of multi-line messages from files into a single event. For example, joining Java exception and stack trace messages into a single event.
This filter will not work with multiple worker threads, for example -w 2 on the Logstash command line.
The config looks like this:
filter {
  multiline {
    type => "type"
    pattern => "pattern, a regexp"
    negate => boolean
    what => "previous" or "next"
  }
}
The pattern should be a regexp which matches what you believe to be an indicator that the field is part of an event consisting of multiple lines of log data.
The what must be "previous" or "next" and indicates the relation to the multi-line event.
The negate can be true or false (defaults to false). If true, a message not matching the pattern will constitute a match of the multiline filter and the what will be applied (and vice versa).
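For instance, here is a minimal sketch of negate in action, joining any line that does not start with a timestamp onto the previous line. The %{TIMESTAMP_ISO8601} name refers to one of the patterns Logstash ships with (see patterns_dir below); the other values are only illustrative.
filter {
  multiline {
    # Lines that do NOT begin with a timestamp are continuations of the previous line.
    pattern => "^%{TIMESTAMP_ISO8601}"
    negate => true
    what => "previous"
  }
}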
For example, Java stack traces are multiline and usually have the message starting at the far-left, with each subsequent line indented. Do this:
filter {
  multiline {
    type => "somefiletype"
    pattern => "^\s"
    what => "previous"
  }
}
This says that any line starting with whitespace belongs to the previous line.
Another example is C line continuations (backslash). Here’s how to do that:
filter {
  multiline {
    type => "somefiletype"
    pattern => "\\$"
    what => "next"
  }
}
This says that any line ending with a backslash should be combined with the following line.
Synopsis
This plugin supports the following configuration options:
Required configuration options:
multiline {
  pattern => ...
  what => ...
}
Available configuration options:
Setting | Input type | Required | Default value |
---|---|---|---|
add_field | hash | No | {} |
add_tag | array | No | [] |
allow_duplicates | boolean | No | true |
max_age | number | No | 5 |
pattern | string | Yes | |
patterns_dir | array | No | [] |
periodic_flush | boolean | No | true |
remove_field | array | No | [] |
remove_tag | array | No | [] |
source | string | No | "message" |
stream_identity | string | No | "%{host}.%{path}.%{type}" |
what | string, one of ["previous", "next"] | Yes | |
Details
add_field
- Value type is hash
- Default value is {}
If this filter is successful, add any arbitrary fields to this event.
Field names can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter {
  multiline {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}

# You can also add multiple fields at once:
filter {
  multiline {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
If the event has field "somefield" == "hello", this filter, on success, would add the field foo_hello with the value above, with the %{host} piece replaced with that value from the event. The second example would also add a hardcoded field.
add_tag
- Value type is array
- Default value is []
If this filter is successful, add arbitrary tags to the event.
Tags can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter {
  multiline {
    add_tag => [ "foo_%{somefield}" ]
  }
}

# You can also add multiple tags at once:
filter {
  multiline {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
If the event has field "somefield" == "hello", this filter, on success, would add a tag foo_hello (and the second example would of course add a taggedy_tag tag).
allow_duplicates
- Value type is boolean
- Default value is true
Allow duplicate values on the source field.
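As a hedged sketch, assuming you want repeated continuation lines collapsed rather than kept when lines are merged (the pattern and what values are only illustrative):
filter {
  multiline {
    pattern => "^\s"
    what => "previous"
    # Drop duplicate values on the source field when lines are merged.
    allow_duplicates => false
  }
}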
max_age
- Value type is number
- Default value is 5
The maximum age an event can be (in seconds) before it is automatically flushed.
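For example, a sketch that gives slow-arriving continuation lines more time before the pending event is flushed; the value 15 is an arbitrary illustration:
filter {
  multiline {
    pattern => "^\s"
    what => "previous"
    # Flush a pending multi-line event once it is 15 seconds old.
    max_age => 15
  }
}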
pattern
- This is a required setting.
- Value type is string
- There is no default value for this setting.
The regular expression to match.
patterns_dir
- Value type is array
- Default value is []
Logstash ships by default with a bunch of patterns, so you don’t necessarily need to define this yourself unless you are adding additional patterns.
Pattern files are plain text with format:
NAME PATTERN
For example:
NUMBER \d+
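As a hedged sketch, the directory path and the MYTIMESTAMP pattern name below are hypothetical; they stand in for a pattern file you maintain yourself:
filter {
  multiline {
    # Hypothetical file /opt/logstash/extra_patterns/datetime containing:
    #   MYTIMESTAMP \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
    patterns_dir => [ "/opt/logstash/extra_patterns" ]
    pattern => "^%{MYTIMESTAMP}"
    negate => true
    what => "previous"
  }
}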
periodic_flush
- Value type is boolean
- Default value is true
Call the filter flush method at regular intervals. Optional.
remove_field
- Value type is array
- Default value is []
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter {
  multiline {
    remove_field => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple fields at once:
filter {
  multiline {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
If the event has field "somefield" == "hello", this filter, on success, would remove the field with name foo_hello if it is present. The second example would remove an additional, non-dynamic field.
remove_tag
- Value type is array
- Default value is []
If this filter is successful, remove arbitrary tags from the event.
Tags can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter {
  multiline {
    remove_tag => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple tags at once:
filter {
  multiline {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
If the event has field "somefield" == "hello", this filter, on success, would remove the tag foo_hello if it is present. The second example would remove a sad, unwanted tag as well.
source
- Value type is string
- Default value is "message"
The field name to execute the pattern match on.
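A minimal sketch, assuming the multi-line text arrives in a field named log_line rather than message (the field name is hypothetical):
filter {
  multiline {
    # Match the pattern against the log_line field instead of message.
    source => "log_line"
    pattern => "^\s"
    what => "previous"
  }
}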
stream_identity
- Value type is string
- Default value is "%{host}.%{path}.%{type}"
The stream identity is how the multiline filter determines which stream an event belongs to. This is generally used for differentiating, say, events coming from multiple files in the same file input, or multiple connections coming from a tcp input.
The default value here is usually what you want, but there are some cases where you want to change it. One such example is if you are using a tcp input with only one client connecting at any time. If that client reconnects (due to error or client restart), then Logstash will identify the new connection as a new stream and break any multiline goodness that may have occurred between the old and new connection. To solve this use case, you can use %{@source_host}.%{@type} instead.
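A sketch of that tcp use case (the pattern and what values are only illustrative):
filter {
  multiline {
    # Identify the stream by host and type only, so a client reconnect
    # does not start a new stream and break multi-line joining.
    stream_identity => "%{@source_host}.%{@type}"
    pattern => "^\s"
    what => "previous"
  }
}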
what
- This is a required setting.
- Value can be any of: previous, next
- There is no default value for this setting.
If the pattern matched, does the event belong to the next or previous event?