Jdbc_streaming filter plugin
- Plugin version: v1.0.7
- Released on: 2019-05-30
- Changelog
For other versions, see the Versioned plugin docs.
Getting Help
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Description
This filter executes a SQL query and stores the result set in the field specified as target. It caches the results locally in an LRU cache with expiry.
For example, you can load a row based on an id in the event.

filter {
  jdbc_streaming {
    jdbc_driver_library => "/path/to/mysql-connector-java-5.1.34-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    jdbc_user => "me"
    jdbc_password => "secret"
    statement => "select * from WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
  }
}
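The looked-up rows are stored in the target field as an array of hashes, one hash per row, keyed by column name. As a minimal sketch of using that result downstream (assuming the COUNTRY table has a Name column and the lookup above matched at least one row), a later mutate filter could promote a value to a top-level field:

filter {
  mutate {
    # Copy the Name column of the first matched row into its own field.
    add_field => { "country_name" => "%{[country_details][0][Name]}" }
  }
}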
Jdbc_streaming Filter Configuration Options
This plugin supports the following configuration options plus the Common Options described later.

Setting | Input type | Required |
---|---|---|
cache_expiration | number | No |
cache_size | number | No |
default_hash | hash | No |
jdbc_connection_string | string | Yes |
jdbc_driver_class | string | Yes |
jdbc_driver_library | a valid filesystem path | No |
jdbc_password | password | No |
jdbc_user | string | No |
jdbc_validate_connection | boolean | No |
jdbc_validation_timeout | number | No |
parameters | hash | No |
statement | string | Yes |
tag_on_default_use | array | No |
tag_on_failure | array | No |
target | string | Yes |
use_cache | boolean | No |
Also see Common Options for a list of options supported by all filter plugins.
cache_expiration
- Value type is number
- Default value is 5.0

The minimum number of seconds any entry should remain in the cache. Defaults to 5 seconds.
A numeric value. You can use decimals, for example cache_expiration => 0.25.
If there are transient JDBC errors, the cache will store empty results for a given parameter set and bypass the JDBC lookup. The default_hash will be merged into the event until the cache entry expires, and then the JDBC lookup will be tried again for the same parameters. Conversely, while the cache contains valid results, any external problem that would cause JDBC errors will not be noticed for the cache_expiration period.
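As a minimal sketch of the cache settings working together (the connection details, statement, and values here are illustrative placeholders):

filter {
  jdbc_streaming {
    jdbc_driver_library => "/path/to/driver.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    statement => "SELECT name FROM users WHERE id = :id"
    parameters => { "id" => "user_id" }
    target => "user"
    use_cache => true        # caching is enabled by default
    cache_size => 1000       # keep up to 1000 distinct parameter sets
    cache_expiration => 30.0 # retry the lookup after 30 seconds
  }
}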
cache_size
- Value type is number
- Default value is 500

The maximum number of cache entries that will be stored. Defaults to 500 entries. The least recently used entry will be evicted.
default_hash
- Value type is hash
- Default value is {}

Define a default object to use when the lookup fails to return a matching row. Ensure that the key names of this object match the columns from the statement.
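For instance, if the statement selects two columns, the default object should carry the same keys. A sketch (column and field names are illustrative):

filter {
  jdbc_streaming {
    # ... connection settings omitted ...
    statement => "SELECT name, region FROM WORLD.COUNTRY WHERE Code = :code"
    parameters => { "code" => "country_code" }
    target => "country_details"
    # Keys mirror the selected columns so downstream code sees a uniform shape.
    default_hash => { "name" => "unknown" "region" => "unknown" }
  }
}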
jdbc_connection_string
- This is a required setting.
- Value type is string
- There is no default value for this setting.

JDBC connection string.
jdbc_driver_class
- This is a required setting.
- Value type is string
- There is no default value for this setting.

JDBC driver class to load, for example "oracle.jdbc.OracleDriver" or "org.apache.derby.jdbc.ClientDriver".
jdbc_driver_library
- Value type is path
- There is no default value for this setting.

Path to a third-party JDBC driver library.
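jdbc_password
- Value type is password
- There is no default value for this setting.

JDBC password.

jdbc_user
- Value type is string
- There is no default value for this setting.

JDBC user.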
jdbc_validate_connection
- Value type is boolean
- Default value is false

Connection pool configuration. Validate connection before use.
jdbc_validation_timeout
- Value type is number
- Default value is 3600

Connection pool configuration. How often to validate a connection (in seconds).
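A sketch of the two validation settings together (values are illustrative); validation adds a small cost each time a connection is used, so a longer interval is usually sufficient:

filter {
  jdbc_streaming {
    # ... connection and statement settings omitted ...
    jdbc_validate_connection => true # check the connection before use
    jdbc_validation_timeout => 600   # but at most once every 10 minutes
  }
}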
parameters
- Value type is hash
- Default value is {}

Hash of query parameters, for example { "id" => "id_field" }. Each key is a named parameter in the statement, and each value is the name of the event field whose value is bound to that parameter at lookup time.
statement
- This is a required setting.
- Value type is string
- There is no default value for this setting.

Statement to execute. To use parameters, use named parameter syntax, for example "SELECT * FROM MYTABLE WHERE ID = :id".
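Putting statement and parameters together, a sketch (table, column, and field names are illustrative): the :id placeholder is filled from the event's id_field value each time the filter runs.

filter {
  jdbc_streaming {
    # ... connection settings omitted ...
    statement => "SELECT * FROM MYTABLE WHERE ID = :id"
    # Bind the :id named parameter to the event field id_field.
    parameters => { "id" => "id_field" }
    target => "mytable_row"
  }
}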
tag_on_default_use
- Value type is array
- Default value is ["_jdbcstreamingdefaultsused"]

Append values to the tags field if no record was found and default values were used.
tag_on_failure
- Value type is array
- Default value is ["_jdbcstreamingfailure"]

Append values to the tags field if an SQL error occurred.
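These tags make lookup problems visible downstream. As a sketch (the output plugins and index names are illustrative), a conditional can route events whose lookup failed:

output {
  if "_jdbcstreamingfailure" in [tags] {
    # The lookup hit a SQL error; route to a separate index for review.
    elasticsearch { index => "lookup-failures" }
  } else {
    elasticsearch { index => "enriched-events" }
  }
}

target
- This is a required setting.
- Value type is string
- There is no default value for this setting.

Define the target field to store the extracted result(s). The field is overwritten if it already exists.

use_cache
- Value type is boolean
- Default value is true

Enable or disable caching, boolean true or false. Defaults to true.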
Common Options
The following configuration options are supported by all filter plugins:

Setting | Input type | Required |
---|---|---|
add_field | hash | No |
add_tag | array | No |
enable_metric | boolean | No |
id | string | No |
periodic_flush | boolean | No |
remove_field | array | No |
remove_tag | array | No |
add_field
- Value type is hash
- Default value is {}

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax.
Example:

filter {
  jdbc_streaming {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}

# You can also add multiple fields at once:
filter {
  jdbc_streaming {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}

If the event has field "somefield" == "hello", this filter, on success, would add the field foo_hello, with the value above and the %{host} piece replaced with the corresponding value from the event. The second example would also add a hardcoded field.
add_tag
- Value type is array
- Default value is []

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the %{field} syntax.
Example:

filter {
  jdbc_streaming {
    add_tag => [ "foo_%{somefield}" ]
  }
}

# You can also add multiple tags at once:
filter {
  jdbc_streaming {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}

If the event has field "somefield" == "hello", this filter, on success, would add the tag foo_hello (and the second example would of course add a taggedy_tag tag).
enable_metric
- Value type is boolean
- Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
id
- Value type is string
- There is no default value for this setting.

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 jdbc_streaming filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

filter {
  jdbc_streaming {
    id => "ABC"
  }
}
periodic_flush
- Value type is boolean
- Default value is false

Call the filter flush method at regular intervals. Optional.
remove_field
- Value type is array
- Default value is []

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax.
Example:

filter {
  jdbc_streaming {
    remove_field => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple fields at once:
filter {
  jdbc_streaming {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}

If the event has field "somefield" == "hello", this filter, on success, would remove the field with name foo_hello if it is present. The second example would remove an additional, non-dynamic field.
remove_tag
- Value type is array
- Default value is []

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the %{field} syntax.
Example:

filter {
  jdbc_streaming {
    remove_tag => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple tags at once:
filter {
  jdbc_streaming {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}

If the event has field "somefield" == "hello", this filter, on success, would remove the tag foo_hello if it is present. The second example would remove a sad, unwanted tag as well.