Aggregate filter plugin
- Plugin version: v2.9.0
- Released on: 2018-11-03
- Changelog
Installation
For plugins not bundled by default, it is easy to install by running bin/logstash-plugin install logstash-filter-aggregate. See Working with plugins for more details.
Getting Help
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Description
The aim of this filter is to aggregate information available across several events (typically log lines) belonging to the same task, and finally to push the aggregated information into the final task event.
You should be very careful to set the number of Logstash filter workers to 1 (the -w 1 flag) for this filter to work correctly; otherwise, events may be processed out of sequence and unexpected results will occur.
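For example, assuming your pipeline configuration lives in a file named aggregate-pipeline.conf (an illustrative name), you could start Logstash with a single filter worker like this:
bin/logstash -w 1 -f aggregate-pipeline.conf
Alternatively, you can set pipeline.workers: 1 in logstash.yml.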
Example #1
- With these given logs:
INFO - 12345 - TASK_START - start
INFO - 12345 - SQL - sqlQuery1 - 12
INFO - 12345 - SQL - sqlQuery2 - 34
INFO - 12345 - TASK_END - end
- you can aggregate "sql duration" for the whole task with this configuration:
filter {
  grok {
    match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
  }

  if [logger] == "TASK_START" {
    aggregate {
      task_id => "%{taskid}"
      code => "map['sql_duration'] = 0"
      map_action => "create"
    }
  }

  if [logger] == "SQL" {
    aggregate {
      task_id => "%{taskid}"
      code => "map['sql_duration'] += event.get('duration')"
      map_action => "update"
    }
  }

  if [logger] == "TASK_END" {
    aggregate {
      task_id => "%{taskid}"
      code => "event.set('sql_duration', map['sql_duration'])"
      map_action => "update"
      end_of_task => true
      timeout => 120
    }
  }
}
- the final event then looks like:
{
  "message" => "INFO - 12345 - TASK_END - end message",
  "sql_duration" => 46
}
The field sql_duration is added and contains the sum of all SQL query durations.
Example #2: no start event
- If you have the same logs as example #1, but without a start log:
INFO - 12345 - SQL - sqlQuery1 - 12
INFO - 12345 - SQL - sqlQuery2 - 34
INFO - 12345 - TASK_END - end
- you can also aggregate "sql duration" with a slightly different configuration:
filter {
  grok {
    match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
  }

  if [logger] == "SQL" {
    aggregate {
      task_id => "%{taskid}"
      code => "map['sql_duration'] ||= 0 ; map['sql_duration'] += event.get('duration')"
    }
  }

  if [logger] == "TASK_END" {
    aggregate {
      task_id => "%{taskid}"
      code => "event.set('sql_duration', map['sql_duration'])"
      end_of_task => true
      timeout => 120
    }
  }
}
- the final event is exactly the same as in example #1
- the key point is the "||=" Ruby operator. It initializes the sql_duration map entry to 0 only if this map entry is not already initialized
Example #3: no end event
Third use case: you have no specific end event.
A typical case is aggregating or tracking user behaviour. We can track a user by its ID through the events; however, once the user stops interacting, the events stop coming in. There is no specific event indicating the end of the user's interaction.
In this case, we can enable the option push_map_as_event_on_timeout to push the aggregation map as a new event when a timeout occurs. In addition, we can use timeout_code to execute code on the populated timeout event. We can also add timeout_task_id_field so we can correlate the task_id, which in this case would be the user's ID.
- Given these logs:
INFO - 12345 - Clicked One
INFO - 12345 - Clicked Two
INFO - 12345 - Clicked Three
- You can aggregate the number of clicks the user made like this:
filter {
  grok {
    match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:user_id} - %{GREEDYDATA:msg_text}" ]
  }

  aggregate {
    task_id => "%{user_id}"
    code => "map['clicks'] ||= 0; map['clicks'] += 1;"
    push_map_as_event_on_timeout => true
    timeout_task_id_field => "user_id"
    timeout => 600 # 10 minutes timeout
    timeout_tags => ['_aggregatetimeout']
    timeout_code => "event.set('several_clicks', event.get('clicks') > 1)"
  }
}
- After ten minutes, this will yield an event like:
{
  "user_id": "12345",
  "clicks": 3,
  "several_clicks": true,
  "tags": [ "_aggregatetimeout" ]
}
Example #4: no end event and tasks come one after the other
Fourth use case: like example #3, you have no specific end event, but also, tasks come one after the other.
That is to say: tasks are not interleaved. All task1 events come, then all task2 events come, and so on.
In that case, you don't want to wait for the task timeout to flush the aggregation map.
- A typical case is aggregating results from the jdbc input plugin.
- Given that you have this SQL query:
SELECT country_name, town_name FROM town
- Using the jdbc input plugin, you get these 3 events from it:
{ "country_name": "France", "town_name": "Paris" } { "country_name": "France", "town_name": "Marseille" } { "country_name": "USA", "town_name": "New-York" }
- And you would like these 2 result events to push them into elasticsearch :
{ "country_name": "France", "towns": [ {"town_name": "Paris"}, {"town_name": "Marseille"} ] } { "country_name": "USA", "towns": [ {"town_name": "New-York"} ] }
- You can do that using the push_previous_map_as_event aggregate plugin option:
filter {
  aggregate {
    task_id => "%{country_name}"
    code => "
      map['country_name'] = event.get('country_name')
      map['towns'] ||= []
      map['towns'] << {'town_name' => event.get('town_name')}
      event.cancel()
    "
    push_previous_map_as_event => true
    timeout => 3
  }
}
- The key point is that each time the aggregate plugin detects a new country_name, it pushes the previous aggregate map as a new Logstash event, and then creates a new empty map for the next country.
- When the 3-second timeout expires, the last aggregate map is pushed as a new event.
- Finally, the initial events (which are not aggregated) are dropped because they are no longer useful (thanks to event.cancel()).
Example #5: no end event and push events as soon as possible
Fifth use case: like example #3, there is no end event.
Events keep coming for an indefinite time, and you want to push the aggregation map as soon as possible after the last user interaction, without waiting for the timeout. This allows the aggregated events to be pushed closer to real time.
A typical case is aggregating or tracking user behaviour. We can track a user by its ID through the events; however, once the user stops interacting, the events stop coming in. There is no specific event indicating the end of the user's interaction.
The user interaction will be considered ended when no events for the specified user (task_id) arrive within the specified inactivity_timeout.
If the user continues interacting for longer than timeout seconds (since the first event), the aggregation map will still be deleted and pushed as a new event when the timeout occurs.
The difference with example #3 is that the events will be pushed as soon as the user stops interacting for inactivity_timeout seconds, instead of waiting until timeout seconds have passed since the first event.
In this case, we can enable the option push_map_as_event_on_timeout to push the aggregation map as a new event when the inactivity timeout occurs. In addition, we can use timeout_code to execute code on the populated timeout event. We can also add timeout_task_id_field so we can correlate the task_id, which in this case would be the user's ID.
- Given these logs:
INFO - 12345 - Clicked One
INFO - 12345 - Clicked Two
INFO - 12345 - Clicked Three
- You can aggregate the number of clicks the user made like this:
filter {
  grok {
    match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:user_id} - %{GREEDYDATA:msg_text}" ]
  }

  aggregate {
    task_id => "%{user_id}"
    code => "map['clicks'] ||= 0; map['clicks'] += 1;"
    push_map_as_event_on_timeout => true
    timeout_task_id_field => "user_id"
    timeout => 3600 # 1 hour timeout: user activity is considered finished one hour after the first event, even if events keep coming
    inactivity_timeout => 300 # 5 minutes timeout: user activity is considered finished if no new events arrive 5 minutes after the last event
    timeout_tags => ['_aggregatetimeout']
    timeout_code => "event.set('several_clicks', event.get('clicks') > 1)"
  }
}
- After five minutes of inactivity or one hour since the first event, this will yield an event like:
{
  "user_id": "12345",
  "clicks": 3,
  "several_clicks": true,
  "tags": [ "_aggregatetimeout" ]
}
How it works
- The filter needs a "task_id" to correlate events (log lines) of the same task.
- At the task beginning, the filter creates a map attached to the task_id.
- For each event, you can execute code using event and map (for instance, copy an event field to the map).
- In the final event, you can execute a last piece of code (for instance, add map data to the final event).
- After the final event, the map attached to the task is deleted (thanks to end_of_task => true).
- An aggregate map is tied to one task_id value, which is tied to one task_id pattern. So if you have 2 filters with different task_id patterns, they won't share the same aggregate map even for the same task_id value.
- In one filter configuration, it is recommended to define a timeout option to protect the feature against unterminated tasks. It tells the filter to delete expired maps.
- If no timeout is defined, by default, all maps older than 1800 seconds are automatically deleted.
- All timeout options have to be defined in only one aggregate filter per task_id pattern (per pipeline), as shown in the sketch after this list. Timeout options are: timeout, inactivity_timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_timestamp_field, timeout_task_id_field, timeout_tags.
- If code execution raises an exception, the error is logged and the event is tagged _aggregateexception.
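A minimal sketch of the timeout-options constraint above, reusing the taskid and logger fields from example #1: the task_id pattern is used by two aggregate filters, but every timeout option lives in only one of them.
filter {
  # first aggregate filter for this task_id pattern: no timeout options here
  if [logger] == "TASK_START" {
    aggregate {
      task_id => "%{taskid}"
      code => "map['sql_duration'] = 0"
      map_action => "create"
    }
  }

  # second aggregate filter for the same task_id pattern: all timeout options are defined here, and only here
  if [logger] == "TASK_END" {
    aggregate {
      task_id => "%{taskid}"
      code => "event.set('sql_duration', map['sql_duration'])"
      end_of_task => true
      timeout => 120
      timeout_tags => ['_aggregatetimeout']
    }
  }
}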
Use Cases
- extract some useful metrics from task logs and push them into the final task log event (as in examples #1 and #2)
- extract error information from any task log line, and push it into the final task event (to get a final event with all error information, if any)
- extract all back-end calls as a list, and push this list into the final task event (to get a task profile)
- extract all HTTP headers logged on several lines and push this list into the final task event (complete HTTP request info)
- for every back-end call, collect call details available on several lines, analyse them and finally tag the final back-end call log line (error, timeout, business-warning, …)
- Finally, the task id can be any correlation id matching your needs: it can be a session id, a file path, …
Aggregate Filter Configuration Options
This plugin supports the following configuration options plus the Common Options described later.

| Setting | Input type | Required |
|---|---|---|
| aggregate_maps_path | string, a valid filesystem path | No |
| code | string | Yes |
| end_of_task | boolean | No |
| inactivity_timeout | number | No |
| map_action | string, one of ["create", "update", "create_or_update"] | No |
| push_map_as_event_on_timeout | boolean | No |
| push_previous_map_as_event | boolean | No |
| task_id | string | Yes |
| timeout | number | No |
| timeout_code | string | No |
| timeout_tags | array | No |
| timeout_task_id_field | string | No |
| timeout_timestamp_field | string | No |
Also see Common Options for a list of options supported by all filter plugins.
aggregate_maps_path
- Value type is string
- There is no default value for this setting.
The path to the file where aggregate maps are stored when Logstash stops, and loaded from when Logstash starts.
If not defined, aggregate maps will not be stored when Logstash stops and will be lost. Must be defined in only one aggregate filter per pipeline (as aggregate maps are shared at pipeline level).
Example:
filter { aggregate { aggregate_maps_path => "/path/to/.aggregate_maps" } }
code
- This is a required setting.
- Value type is string
- There is no default value for this setting.
The code to execute to update the aggregated map using the current event, or, conversely, the code to execute to update the event using the aggregated map.
Available variables are:
- event: the current Logstash event
- map: the aggregated map associated with the task_id, containing key/value pairs. The data structure is a Ruby Hash.
- map_meta: meta information associated with the aggregate map. It allows you to set a custom timeout or inactivity_timeout, and to read creation_timestamp, lastevent_timestamp and task_id.
When the option push_map_as_event_on_timeout is set to true, if you set map_meta.timeout=0 in the code block, then the aggregated map is immediately pushed as a new event (see the sketch after the example below).
Example:
filter { aggregate { code => "map['sql_duration'] += event.get('duration')" } }
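For instance, a hedged sketch (the flush_now field and the counter are hypothetical) that uses map_meta to push the aggregation map right away when a particular event arrives, instead of waiting for the timeout:
filter {
  aggregate {
    task_id => "%{taskid}"
    code => "
      map['lines'] ||= 0
      map['lines'] += 1
      # illustrative: push the map as a new event immediately when this field is present
      map_meta.timeout = 0 if event.get('flush_now')
    "
    push_map_as_event_on_timeout => true
    timeout => 120
  }
}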
end_of_task
- Value type is boolean
- Default value is false
Tells the filter that the task has ended and, therefore, that the aggregate map should be deleted after code execution.
inactivity_timeout
- Value type is number
- There is no default value for this setting.
The amount of time, in seconds since the last event, after which a task is considered expired.
When a timeout occurs for a task, its aggregate map is evicted.
If push_map_as_event_on_timeout or push_previous_map_as_event is set to true, the task aggregation map is pushed as a new Logstash event.
inactivity_timeout can be defined for each "task_id" pattern.
inactivity_timeout must be lower than timeout.
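For example, a minimal sketch along the lines of example #5, where a task expires either 300 seconds after its last event or 3600 seconds after its first event, whichever comes first:
filter {
  aggregate {
    task_id => "%{user_id}"
    code => "map['clicks'] ||= 0; map['clicks'] += 1"
    push_map_as_event_on_timeout => true
    inactivity_timeout => 300
    timeout => 3600
  }
}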
map_action
- Value type is string
- Default value is "create_or_update"
Tells the filter what to do with the aggregate map:
- "create": create the map, and execute the code only if the map wasn't created before
- "update": don't create the map, and execute the code only if the map was created before
- "create_or_update": create the map if it wasn't created before, and execute the code in all cases
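As an illustration, this is how example #1 above uses these values: the start event creates the map, and subsequent events only update it if it already exists.
filter {
  if [logger] == "TASK_START" {
    aggregate {
      task_id => "%{taskid}"
      code => "map['sql_duration'] = 0"
      map_action => "create"
    }
  }
  if [logger] == "SQL" {
    aggregate {
      task_id => "%{taskid}"
      code => "map['sql_duration'] += event.get('duration')"
      map_action => "update"
    }
  }
}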
push_map_as_event_on_timeout
- Value type is boolean
- Default value is false
When this option is enabled, each time a task timeout is detected, the task aggregation map is pushed as a new Logstash event. This makes it possible to detect and process task timeouts in Logstash, and also to manage tasks that have no explicit end event.
push_previous_map_as_event
- Value type is boolean
- Default value is false
When this option is enabled, each time the aggregate plugin detects a new task id, it pushes the previous aggregate map as a new Logstash event, and then creates a new empty map for the next task.
This option works correctly only if tasks come one after the other, that is: all task1 events, then all task2 events, etc.
task_id
- This is a required setting.
- Value type is string
- There is no default value for this setting.
The expression defining the task ID used to correlate logs.
This value must uniquely identify the task.
Example:
filter { aggregate { task_id => "%{type}%{my_task_id}" } }
timeout
- Value type is number
- Default value is 1800
The amount of time, in seconds since the first event, after which a task is considered expired.
When a timeout occurs for a task, its aggregate map is evicted.
If push_map_as_event_on_timeout or push_previous_map_as_event is set to true, the task aggregation map is pushed as a new Logstash event.
The timeout can be defined for each "task_id" pattern.
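Example (a minimal sketch; the 600-second value and field names are illustrative):
filter {
  aggregate {
    task_id => "%{taskid}"
    code => "map['sql_duration'] += event.get('duration')"
    timeout => 600
  }
}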
timeout_code
- Value type is string
- There is no default value for this setting.
The code to execute to complete the timeout-generated event, when push_map_as_event_on_timeout or push_previous_map_as_event is set to true.
The code block will have access to the newly generated timeout event, which is pre-populated with the aggregation map.
If timeout_task_id_field is set, the event is also populated with the task_id value.
Example:
filter { aggregate { timeout_code => "event.set('state', 'timeout')" } }
timeout_tags
- Value type is array
- Default value is []
Defines tags to add when a timeout event is generated and yielded.
Example:
filter { aggregate { timeout_tags => ["aggregate_timeout"] } }
timeout_task_id_field
- Value type is string
- There is no default value for this setting.
This option indicates the field of the timeout-generated event in which the current "task_id" value will be set. This can help correlate which tasks have timed out.
By default, if this option is not set, the task id value won't be set in the timeout-generated event.
Example:
filter { aggregate { timeout_task_id_field => "task_id" } }
timeout_timestamp_field
- Value type is string
- There is no default value for this setting.
By default, the timeout is computed using the system time where Logstash is running.
When this option is set, the timeout is computed using the event timestamp field indicated in this option. This means that when the first event arrives on the aggregate filter and triggers a map creation, the map creation time will be equal to this event's timestamp. Then, each time a new event arrives on the aggregate filter, the event timestamp is compared to the map creation time to check whether a timeout has happened.
This option is particularly useful when processing old logs with the option push_map_as_event_on_timeout => true. It lets you generate aggregated events based on timeouts from old logs, where system time is inappropriate.
Warning: for this option to work correctly, it must be set on the first aggregate filter.
Example:
filter { aggregate { timeout_timestamp_field => "@timestamp" } }
Common Options
The following configuration options are supported by all filter plugins:
| Setting | Input type | Required |
|---|---|---|
| add_field | hash | No |
| add_tag | array | No |
| enable_metric | boolean | No |
| id | string | No |
| periodic_flush | boolean | No |
| remove_field | array | No |
| remove_tag | array | No |
add_field
- Value type is hash
- Default value is {}
If this filter is successful, add any arbitrary fields to this event.
Field names can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter { aggregate { add_field => { "foo_%{somefield}" => "Hello world, from %{host}" } } }
# You can also add multiple fields at once:
filter {
  aggregate {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
If the event has field "somefield" == "hello", this filter, on success, would add field foo_hello if it is present, with the value above and the %{host} piece replaced with that value from the event. The second example would also add a hardcoded field.
add_tag
- Value type is array
- Default value is []
If this filter is successful, add arbitrary tags to the event.
Tags can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter { aggregate { add_tag => [ "foo_%{somefield}" ] } }
# You can also add multiple tags at once:
filter {
  aggregate {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
If the event has field "somefield" == "hello", this filter, on success, would add a tag foo_hello (and the second example would of course add a taggedy_tag tag).
enable_metric
- Value type is boolean
- Default value is true
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
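For example, a minimal sketch (the task_id and code values are illustrative) that turns metrics off for one aggregate filter instance:
filter {
  aggregate {
    task_id => "%{taskid}"
    code => "map['events'] ||= 0; map['events'] += 1"
    enable_metric => false
  }
}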
id
- Value type is string
- There is no default value for this setting.
Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 aggregate filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
filter { aggregate { id => "ABC" } }
periodic_flush
- Value type is boolean
- Default value is false
Call the filter flush method at regular intervals. Optional.
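For example, a minimal sketch (the task_id and code values are illustrative) that enables periodic flushing for an aggregate filter:
filter {
  aggregate {
    task_id => "%{taskid}"
    code => "map['events'] ||= 0; map['events'] += 1"
    periodic_flush => true
  }
}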
remove_field
- Value type is array
- Default value is []
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter { aggregate { remove_field => [ "foo_%{somefield}" ] } }
# You can also remove multiple fields at once:
filter {
  aggregate {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
If the event has field "somefield" == "hello", this filter, on success, would remove the field with name foo_hello if it is present. The second example would remove an additional, non-dynamic field.
remove_tag
- Value type is array
- Default value is []
If this filter is successful, remove arbitrary tags from the event.
Tags can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter { aggregate { remove_tag => [ "foo_%{somefield}" ] } }
# You can also remove multiple tags at once:
filter {
  aggregate {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
If the event has field "somefield" == "hello", this filter, on success, would remove the tag foo_hello if it is present. The second example would remove a sad, unwanted tag as well.