jdbc_streaming
- Version: 1.0.0
- Released on: March 27, 2017
- Changelog
Installation
For plugins not bundled by default, it is easy to install by running bin/logstash-plugin install logstash-filter-jdbc_streaming. See Working with plugins for more details.
Getting Help
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Description
Enrich Logstash events with your database data. This filter executes a SQL query and stores the result set in the field specified as target. It will cache the results locally in an LRU cache with a configurable expiry time.
For example, you can load a row based on an id from an event:
filter {
  jdbc_streaming {
    jdbc_driver_library => "/path/to/mysql-connector-java-5.1.34-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    jdbc_user => "me"
    jdbc_password => "secret"
    statement => "select * from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
  }
}
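On a match, the rows returned by the statement are placed in the target field as an array of row hashes, one hash per row keyed by column name. Assuming, for illustration only, that the WORLD.COUNTRY table has Code and Name columns (the event values below are made up), the enriched event would look roughly like:
{
  "country_code" => "SE",
  "country_details" => [
    { "Code" => "SE", "Name" => "Sweden" }
  ]
}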
Synopsis
This plugin supports the following configuration options:
Required configuration options:
jdbc_streaming {
  jdbc_connection_string => ...
  jdbc_driver_class => ...
  statement => ...
  target => ...
}
Available configuration options:
Setting | Input type | Required
---|---|---
add_field | hash | No
add_tag | array | No
cache_expiration | number | No
cache_size | number | No
default_hash | hash | No
enable_metric | boolean | No
id | string | No
jdbc_connection_string | string | Yes
jdbc_driver_class | string | Yes
jdbc_driver_library | a valid filesystem path | No
jdbc_password | password | No
jdbc_user | string | No
jdbc_validate_connection | boolean | No
jdbc_validation_timeout | number | No
parameters | hash | No
periodic_flush | boolean | No
remove_field | array | No
remove_tag | array | No
statement | string | Yes
tag_on_default_use | array | No
tag_on_failure | array | No
target | string | Yes
use_cache | boolean | No
Details
add_field
- Value type is hash
- Default value is {}
If this filter is successful, add any arbitrary fields to this event.
Field names can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter {
  jdbc_streaming {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}

# You can also add multiple fields at once:
filter {
  jdbc_streaming {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
If the event has field "somefield" == "hello" this filter, on success, would add field foo_hello if it is present, with the value above and the %{host} piece replaced with that value from the event. The second example would also add a hardcoded field.
add_tag
- Value type is array
- Default value is []
If this filter is successful, add arbitrary tags to the event.
Tags can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter {
  jdbc_streaming {
    add_tag => [ "foo_%{somefield}" ]
  }
}

# You can also add multiple tags at once:
filter {
  jdbc_streaming {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
If the event has field "somefield" == "hello" this filter, on success, would add a tag foo_hello (and the second example would of course add a taggedy_tag tag).
cache_expiration
- Value type is number
- Default value is 5.0
The minimum number of seconds any entry should remain in the cache; defaults to 5 seconds. This is a numeric value and decimals are allowed, for example cache_expiration => 0.25.
If there are transient JDBC errors, the cache will store empty results for a given parameter set and bypass the JDBC lookup; the default_hash is merged into the event until the cache entry expires, and then the JDBC lookup is tried again for the same parameters. Conversely, while the cache contains valid results, any external problem that would cause JDBC errors will not be noticed for the cache_expiration period.
cache_size
- Value type is number
- Default value is 500
The maximum number of cache entries to store; defaults to 500 entries. The least recently used entry will be evicted.
default_hash
- Value type is hash
- Default value is {}
Define a default object to use when the lookup fails to return a matching row. Ensure that the key names of this object match the columns from the statement.
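To illustrate how the cache settings work together, here is a sketch reusing the hypothetical WORLD.COUNTRY table from the description (the table, columns, and values are illustrative): lookups are cached for at least cache_expiration seconds, at most cache_size entries are kept, and when no row matches, the default_hash, whose key matches the Name column selected by the statement, is used instead.
filter {
  jdbc_streaming {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_driver_library => "/path/to/mysql-connector-java-5.1.34-bin.jar"
    statement => "select Name from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
    cache_expiration => 10.0                  # entries live for at least 10 seconds
    cache_size => 1000                        # least recently used entries evicted beyond 1000
    default_hash => { "Name" => "unknown" }   # key matches the Name column of the statement
  }
}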
enable_metric
- Value type is boolean
- Default value is true
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
id
- Value type is string
- There is no default value for this setting.
Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 grok filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
filter {
  jdbc_streaming {
    id => "my_plugin_id"
  }
}
jdbc_connection_string
- This is a required setting.
- Value type is string
- There is no default value for this setting.
JDBC connection string.
jdbc_driver_class
- This is a required setting.
- Value type is string
- There is no default value for this setting.
JDBC driver class to load, for example "oracle.jdbc.OracleDriver" or "org.apache.derby.jdbc.ClientDriver".
jdbc_driver_library
- Value type is path
- There is no default value for this setting.
JDBC driver library path to third party driver library.
jdbc_validate_connection
- Value type is boolean
- Default value is false
Connection pool configuration. Validate connection before use.
jdbc_validation_timeout
- Value type is number
- Default value is 3600
Connection pool configuration. How often to validate a connection (in seconds).
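For long-running pipelines it can be useful to have the pool check connections before use. A sketch combining both pool settings (the connection details are placeholders):
filter {
  jdbc_streaming {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "select * from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
    jdbc_validate_connection => true   # check the connection before use
    jdbc_validation_timeout => 600     # validate at most once every 600 seconds
  }
}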
parameters
- Value type is hash
- Default value is {}
Hash of query parameters, for example { "id" => "id_field" }. The hash key is the named parameter referenced in the statement, and the hash value is the name of the event field whose value is bound to it.
periodic_flush
- Value type is boolean
- Default value is false
Call the filter flush method at regular intervals. Optional.
remove_field
- Value type is array
- Default value is []
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter {
  jdbc_streaming {
    remove_field => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple fields at once:
filter {
  jdbc_streaming {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
If the event has field "somefield" == "hello" this filter, on success, would remove the field with name foo_hello if it is present. The second example would remove an additional, non-dynamic field.
remove_tag
- Value type is array
- Default value is []
If this filter is successful, remove arbitrary tags from the event.
Tags can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter {
  jdbc_streaming {
    remove_tag => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple tags at once:
filter {
  jdbc_streaming {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
If the event has field "somefield" == "hello" this filter, on success, would remove the tag foo_hello if it is present. The second example would remove a sad, unwanted tag as well.
statement
- This is a required setting.
- Value type is string
- There is no default value for this setting.
Statement to execute. To use parameters, use named parameter syntax, for example "SELECT * FROM MYTABLE WHERE ID = :id".
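To make the named-parameter flow concrete, here is a sketch built around the example statement above (the Derby connection string and the id_field event field are illustrative): the :id placeholder is filled with the value of the event field named in parameters.
filter {
  jdbc_streaming {
    jdbc_connection_string => "jdbc:derby://localhost:1527/mydb"
    jdbc_driver_class => "org.apache.derby.jdbc.ClientDriver"
    statement => "SELECT * FROM MYTABLE WHERE ID = :id"
    parameters => { "id" => "id_field" }   # :id is bound to the event field id_field
    target => "mytable_rows"
  }
}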
tag_on_default_use
- Value type is array
- Default value is ["_jdbcstreamingdefaultsused"]
Append values to the tags field if no record was found and default values were used.
tag_on_failure
- Value type is array
- Default value is ["_jdbcstreamingfailure"]
Append values to the tags field if a SQL error occurred.
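These tags make it easy to route or flag problem events later in the pipeline. A sketch using conditionals on the default tag values (the enrichment_status field is illustrative):
filter {
  if "_jdbcstreamingfailure" in [tags] {
    mutate { add_field => { "enrichment_status" => "lookup_error" } }
  } else if "_jdbcstreamingdefaultsused" in [tags] {
    mutate { add_field => { "enrichment_status" => "defaults_used" } }
  }
}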