Elastic Integration filter plugin v0.1.11
- Plugin version: v0.1.11
- Released on: 2024-07-02
- Changelog
For other versions, see the overview list.
To learn more about Logstash, see the Logstash Reference.
Getting help
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Description
Use this filter to process Elastic integrations powered by Elasticsearch Ingest Node in Logstash.
When you configure this filter to point to an Elasticsearch cluster, it detects which ingest pipeline (if any) should be executed for each event,
using an explicitly-defined pipeline_name
or auto-detecting the event’s data-stream and its default pipeline.
It then loads that pipeline’s definition from Elasticsearch and runs that pipeline inside Logstash without transmitting the event to Elasticsearch.
Events that are successfully handled by their ingest pipeline will have [@metadata][target_ingest_pipeline]
set to _none
so that any downstream Elasticsearch output in the Logstash pipeline will avoid running the event’s default pipeline again in Elasticsearch.
Some multi-pipeline configurations, such as Logstash-to-Logstash over HTTP(S), do not maintain the state of [@metadata] fields.
In these setups, you may need to explicitly configure your downstream pipeline’s Elasticsearch output with pipeline => "_none" to avoid re-running the default pipeline.
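For example, a downstream pipeline receiving such events might pin its output to skip ingest-pipeline processing; the host below is a hypothetical placeholder:

```
output {
  elasticsearch {
    hosts    => ["https://your-downstream-es:9200"]  # hypothetical host
    pipeline => "_none"  # do not re-run the default ingest pipeline
  }
}
```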
Events that fail ingest pipeline processing will be tagged with _ingest_pipeline_failure
, and their [@metadata][_ingest_pipeline_failure]
will be populated with details as a key/value map.
This plugin requires Java 17 or newer and Logstash 8.7.0 or newer.
Using filter-elastic_integration
with output-elasticsearch
Elastic Integrations are designed to work with data streams and ECS-compatible output.
Be sure that these features are enabled in the output-elasticsearch plugin:
- Set data_stream to true. (Check out Data streams for additional data stream settings.)
- Set ecs_compatibility to v1 or v8.

Check out the output-elasticsearch plugin docs for additional settings.
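Putting these together, a compatible output-elasticsearch configuration might look like this; the host and credentials are hypothetical placeholders:

```
output {
  elasticsearch {
    hosts             => ["https://your-es-host:9200"]  # hypothetical host
    api_key           => "id:api_key_value"             # hypothetical credentials
    data_stream       => true                           # integrations write to data streams
    ecs_compatibility => "v8"                           # ECS-compatible field handling
  }
}
```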
Minimum configuration
You will need to configure this plugin to connect to Elasticsearch, and may also need to provide local GeoIp databases.

filter {
  elastic_integration {
    cloud_id   => "YOUR_CLOUD_ID_HERE"
    cloud_auth => "YOUR_CLOUD_AUTH_HERE"
    geoip_database_directory => "/etc/your/geoip-databases"
  }
}
Read on for a guide to configuration, or jump to the complete list of configuration options.
Connecting to Elasticsearch
This plugin communicates with Elasticsearch to identify which ingest pipeline should be run for a given event, and to retrieve the ingest pipeline definitions themselves. You must configure this plugin to point to Elasticsearch using exactly one of:

- cloud_id
- hosts
Communication will be made securely over SSL unless you explicitly configure this plugin otherwise.
You may need to configure how this plugin establishes trust of the server that responds, and will likely need to configure how this plugin presents its own identity or credentials.
SSL Trust Configuration
When communicating over SSL, this plugin fully validates the proof-of-identity presented by Elasticsearch using the system trust store. You can provide an alternate source of trust with one of:

- A PEM-formatted list of trusted certificate authorities (see ssl_certificate_authorities)
- A JKS- or PKCS12-formatted keystore containing trusted certificates (see ssl_truststore_path)
You can also configure which aspects of the proof-of-identity are verified (see ssl_verification_mode).
SSL Identity Configuration
When communicating over SSL, you can also configure this plugin to present a certificate-based proof-of-identity to the Elasticsearch cluster it connects to using one of:

- A PKCS8 certificate/key pair (see ssl_certificate)
- A JKS- or PKCS12-formatted keystore (see ssl_keystore_path)
Request Identity
You can configure this plugin to present authentication credentials to Elasticsearch in one of several ways:

- API key (see api_key)
- Cloud Auth (see cloud_auth)
- HTTP Basic Auth (see username and password)
Your request credentials are only as secure as the connection they are being passed over. They provide neither privacy nor secrecy on their own, and can easily be recovered by an adversary when SSL is disabled.
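For example, connecting with an API key over SSL might look like this; the host and key are hypothetical placeholders:

```
filter {
  elastic_integration {
    hosts   => ["https://your-es-host:9200"]  # hypothetical host
    api_key => "base64-encoded-id:key-value"  # hypothetical encoded API key
  }
}
```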
Minimum required privileges
This plugin communicates with Elasticsearch to resolve events into pipeline definitions and needs to be configured with credentials that have appropriate privileges to read from the relevant APIs. At startup, this plugin confirms that the current user has sufficient privileges, including:

Privilege name | Description
---|---
monitor | A read-only privilege for cluster operations such as cluster health or state. The plugin requires it when checking the Elasticsearch license.
read_pipeline | Read-only get and simulate access to ingest pipelines. It is required when the plugin reads Elasticsearch ingest pipeline definitions.
manage_index_templates | All operations on index templates. It is required when the plugin resolves the default pipeline based on an event’s data stream name.
This plugin cannot determine whether an anonymous user has the required privileges when it connects to an Elasticsearch cluster that has security features disabled, or when the user does not provide credentials. In those cases the plugin starts in an unsafe mode, raises a runtime error indicating that API permissions are insufficient, and prevents events from being processed by the ingest pipeline.
To avoid these issues, set up user authentication and ensure that security in Elasticsearch is enabled (default).
Supported Ingest Processors
This filter can run Elasticsearch Ingest Node pipelines that are wholly composed of the supported subset of processors. It has access to the Painless and Mustache scripting engines where applicable.
Supported processors come from the Ingest Common, Redact, and GeoIp processor modules. Most have no caveats; the exceptions are:

Source | Processor | Caveats
---|---|---
Ingest Common | pipeline | resolved pipeline must be wholly composed of supported processors
Ingest Common | user_agent | side-loading a custom regex file is not supported; the processor will use the default user agent definitions as specified in the Elasticsearch processor definition
GeoIp | geoip | requires MaxMind GeoIP2 databases, which may be provided by Logstash’s Geoip Database Management OR configured using geoip_database_directory
Field Mappings
During execution the Ingest pipeline works with a temporary mutable view of the Logstash event called an ingest document. This view contains all of the as-structured fields from the event with minimal type conversions.
It also contains additional metadata fields as required by ingest pipeline processors:
- _version: a long-value integer equivalent to the event’s @version, or a sensible default value of 1.
- _ingest.timestamp: a ZonedDateTime equivalent to the event’s @timestamp field
After execution completes the event is sanitized to ensure that Logstash-reserved fields have the expected shape, providing sensible defaults for any missing required fields. When an ingest pipeline has set a reserved field to a value that cannot be coerced, the value is made available in an alternate location on the event as described below.
Logstash field | type | value
---|---|---
@timestamp | timestamp | First coercible value of the ingest document’s corresponding timestamp fields
@version | String-encoded integer | First coercible value of the ingest document’s corresponding version fields
@metadata | key/value map | The ingest document’s metadata map
tags | a String or a list of Strings | The ingest document’s tags
Additionally, Elasticsearch IngestDocument metadata fields are made available on the resulting event if-and-only-if they were set during pipeline execution.
Resolving Pipeline Definitions
This plugin uses Elasticsearch to resolve pipeline names into their pipeline definitions.
When configured without an explicit pipeline_name, or when a pipeline uses the Reroute Processor, it also uses Elasticsearch to establish mappings of data stream names to their respective default pipeline names.
It uses hit/miss caches to avoid querying Elasticsearch for every single event. It also works to update these cached mappings before they expire. The result is that when Elasticsearch is responsive this plugin is able to pick up changes quickly without impacting its own performance, and it can survive periods of Elasticsearch issues without interruption by continuing to use potentially-stale mappings or definitions.
To achieve this, mappings are cached for a maximum of 24 hours, and cached values are reloaded every 1 minute with the following effect:
- when a reloaded mapping is non-empty and is the same as its already-cached value, its time-to-live is reset to ensure that subsequent events can continue using the confirmed-unchanged value
- when a reloaded mapping is non-empty and is different from its previously-cached value, the entry is updated so that subsequent events will use the new value
- when a reloaded mapping is newly empty, the previous non-empty mapping is replaced with a new empty entry so that subsequent events will use the empty value
- when the reload of a mapping fails, this plugin emits a log warning but the existing cache entry is unchanged and gets closer to its expiry.
Elastic Integration Filter Configuration Options
This plugin supports the following configuration options plus the Common options described later.

Setting | Input type | Required
---|---|---
api_key | password | No
cloud_auth | password | No
cloud_id | string | No
geoip_database_directory | path | No
hosts | list of uris | No
password | password | No
pipeline_name | string | No
ssl_certificate | path | No
ssl_certificate_authorities | list of paths | No
ssl_enabled | boolean | No
ssl_key | path | No
ssl_key_passphrase | password | No
ssl_keystore_password | password | No
ssl_keystore_path | path | No
ssl_truststore_password | password | No
ssl_truststore_path | path | No
ssl_verification_mode | string, one of ["full", "certificate", "none"] | No
username | string | No
api_key
- Value type is password
- There is no default value for this setting.
The encoded form of an API key that is used to authenticate this plugin to Elasticsearch.
cloud_auth
- Value type is password
- There is no default value for this setting.
Cloud authentication string ("<username>:<password>" format) is an alternative to the username/password pair and can be obtained from the Elastic Cloud web console.
cloud_id
- Value type is string
- There is no default value for this setting.
- Cannot be combined with ssl_enabled => false.
Cloud Id, from the Elastic Cloud web console.
When connecting with a Cloud Id, communication to Elasticsearch is secured with SSL.
For more details, check out the Logstash-to-Cloud documentation.
geoip_database_directory
- Value type is path
- There is no default value for this setting.
When running in a Logstash process that has Geoip Database Management enabled, integrations that use the Geoip Processor will use managed MaxMind databases by default. By using managed databases you accept and agree to the MaxMind EULA.
You may instead configure this plugin with the path to a local directory containing database files.
This plugin will discover all regular files with the .mmdb
suffix in the provided directory, and make each available by its file name to the GeoIp processors in integration pipelines.
It expects the files it finds to be in the MaxMind DB format with one of the following database types:

- AnonymousIp
- ASN
- City
- Country
- ConnectionType
- Domain
- Enterprise
- Isp
Most integrations rely on databases being present named exactly:

- GeoLite2-ASN.mmdb,
- GeoLite2-City.mmdb, or
- GeoLite2-Country.mmdb
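For example, a local database directory matching the earlier example path might contain these files; the directory path is a hypothetical placeholder:

```
/etc/your/geoip-databases/GeoLite2-ASN.mmdb
/etc/your/geoip-databases/GeoLite2-City.mmdb
/etc/your/geoip-databases/GeoLite2-Country.mmdb
```

Each file is made available to GeoIp processors by its file name, so the names must match what the integration pipelines expect.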
hosts
- Value type is a list of uris
- There is no default value for this setting.
- Constraints:
  - When any URL contains a protocol component, all URLs must have the same protocol as each other.
    - https-protocol hosts use HTTPS and cannot be combined with ssl_enabled => false.
    - http-protocol hosts use unsecured HTTP and cannot be combined with ssl_enabled => true.
  - When any URL omits a port component, the default 9200 is used.
  - When any URL contains a path component, all URLs must have the same path as each other.
A non-empty list of Elasticsearch hosts to connect to.
Examples:

- "127.0.0.1"
- ["127.0.0.1:9200","127.0.0.2:9200"]
- ["http://127.0.0.1"]
- ["https://127.0.0.1:9200"]
- ["https://127.0.0.1:9200/subpath"] (if using a proxy on a subpath)
When connecting with a list of hosts, communication to Elasticsearch is secured with SSL unless configured otherwise.
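Putting these defaults together, a hosts list that omits protocol and port connects over SSL to port 9200; the credentials below are hypothetical placeholders:

```
filter {
  elastic_integration {
    hosts    => ["127.0.0.1", "127.0.0.2"]  # both default to SSL on port 9200
    username => "logstash_user"             # hypothetical credentials
    password => "changeme"
  }
}
```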
Disabling SSL is dangerous
The security of this plugin relies on SSL to avoid leaking credentials and to avoid running illegitimate ingest pipeline definitions.
There are two ways to disable SSL:
- Provide a list of http-protocol hosts
- Set ssl_enabled => false
pipeline_name
- Value type is string
- There is no default value for this setting.
- When present, the event’s initial pipeline will not be auto-detected from the event’s data stream fields.
- Value may be a sprintf-style template; if any referenced fields cannot be resolved the event will not be routed to an ingest pipeline.
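For example, a sprintf-style pipeline_name can route events based on an event field; the host and field reference below are hypothetical:

```
filter {
  elastic_integration {
    hosts => ["https://your-es-host:9200"]       # hypothetical host
    # hypothetical field reference; if it cannot be resolved,
    # the event is not routed to an ingest pipeline
    pipeline_name => "logs-%{[event][dataset]}"
  }
}
```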
ssl_certificate
- Value type is path
- There is no default value for this setting.
- When present, ssl_key and ssl_key_passphrase are also required.
- Cannot be combined with configurations that disable SSL
Path to a PEM-encoded certificate or certificate chain with which to identify this plugin to Elasticsearch.
ssl_certificate_authorities
- Value type is a list of paths
- There is no default value for this setting.
- Cannot be combined with configurations that disable SSL
- Cannot be combined with ssl_verification_mode => none.
One or more PEM-formatted files defining certificate authorities.
This setting can be used to override the system trust store for verifying the SSL certificate presented by Elasticsearch.
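For example, to trust a private certificate authority instead of the system trust store; the host and path are hypothetical placeholders:

```
filter {
  elastic_integration {
    hosts => ["https://your-es-host:9200"]                     # hypothetical host
    ssl_certificate_authorities => ["/etc/your/ca/bundle.pem"] # hypothetical CA path
  }
}
```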
ssl_enabled
- Value type is boolean
- There is no default value for this setting.
Secure SSL communication to Elasticsearch is enabled unless:
- it is explicitly disabled with ssl_enabled => false; OR
- it is implicitly disabled by providing http-protocol hosts.
Specifying ssl_enabled => true can be a helpful redundant safeguard to ensure this plugin cannot be configured to use non-SSL communication.
ssl_key
- Value type is path
- There is no default value for this setting.
- Required when connection identity is configured with ssl_certificate
- Cannot be combined with configurations that disable SSL
A path to a PKCS8-formatted SSL certificate key.
ssl_keystore_password
- Value type is password
- There is no default value for this setting.
- Required when connection identity is configured with ssl_keystore_path
- Cannot be combined with configurations that disable SSL
Password for the ssl_keystore_path.
ssl_keystore_path
- Value type is path
- There is no default value for this setting.
- When present, ssl_keystore_password is also required.
- Cannot be combined with configurations that disable SSL
A path to a JKS- or PKCS12-formatted keystore with which to identify this plugin to Elasticsearch.
ssl_key_passphrase
- Value type is password
- There is no default value for this setting.
- Required when connection identity is configured with ssl_certificate
- Cannot be combined with configurations that disable SSL
A password or passphrase for the ssl_key.
ssl_truststore_path
- Value type is path
- There is no default value for this setting.
- When present, ssl_truststore_password is required.
- Cannot be combined with configurations that disable SSL
- Cannot be combined with ssl_verification_mode => none.
A path to a JKS- or PKCS12-formatted keystore where trusted certificates are located.
This setting can be used to override the system trust store for verifying the SSL certificate presented by Elasticsearch.
ssl_truststore_password
- Value type is password
- There is no default value for this setting.
-
Required when connection trust is configured with
ssl_truststore_path
- Cannot be combined with configurations that disable SSL
Password for the ssl_truststore_path.
ssl_verification_mode
- Value type is string
- There is no default value for this setting.
- Cannot be combined with configurations that disable SSL
Level of verification of the certificate provided by Elasticsearch.
SSL certificates presented by Elasticsearch are fully-validated by default.
Available modes:

- none: performs no validation, implicitly trusting any server that this plugin connects to (insecure)
- certificate: validates that the server-provided certificate is signed by a trusted certificate authority and that the server can prove possession of its associated private key (less secure)
- full (default): performs the same validations as certificate and also verifies that the provided certificate has an identity claim matching the server we are attempting to connect to (most secure)
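For instance, to relax only the hostname/identity check while still requiring a CA-signed certificate; the host is a hypothetical placeholder:

```
filter {
  elastic_integration {
    hosts => ["https://your-es-host:9200"]  # hypothetical host
    ssl_verification_mode => "certificate"  # skip the identity-claim check only
  }
}
```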
Common options
These configuration options are supported by all filter plugins:
Setting | Input type | Required
---|---|---
add_field | hash | No
add_tag | array | No
enable_metric | boolean | No
id | string | No
periodic_flush | boolean | No
remove_field | array | No
remove_tag | array | No
add_field
- Value type is hash
- Default value is {}
If this filter is successful, add any arbitrary fields to this event.
Field names can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter {
  elastic_integration {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}

# You can also add multiple fields at once:
filter {
  elastic_integration {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
If the event has field "somefield" == "hello", this filter, on success, would add the field foo_hello, with the value above and the %{host} piece replaced with that value from the event. The second example would also add a hardcoded field.
add_tag
- Value type is array
- Default value is []
If this filter is successful, add arbitrary tags to the event.
Tags can be dynamic and include parts of the event using the %{field}
syntax.
Example:
filter {
  elastic_integration {
    add_tag => [ "foo_%{somefield}" ]
  }
}

# You can also add multiple tags at once:
filter {
  elastic_integration {
    add_tag => [ "foo_%{somefield}", "taggedy_tag" ]
  }
}
If the event has field "somefield" == "hello", this filter, on success, would add the tag foo_hello (and the second example would of course add a taggedy_tag tag).
enable_metric
- Value type is boolean
- Default value is true
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
id
- Value type is string
- There is no default value for this setting.
Add a unique ID
to the plugin configuration. If no ID is specified, Logstash will generate one.
It is strongly recommended to set this ID in your configuration. This is particularly useful
when you have two or more plugins of the same type, for example, if you have 2 elastic_integration filters.
Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
filter {
  elastic_integration {
    id => "ABC"
  }
}
periodic_flush
- Value type is boolean
- Default value is false
Call the filter flush method at regular intervals. Optional.
remove_field
- Value type is array
- Default value is []
If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
filter {
  elastic_integration {
    remove_field => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple fields at once:
filter {
  elastic_integration {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
If the event has field "somefield" == "hello", this filter, on success, would remove the field with name foo_hello if it is present. The second example would remove an additional, non-dynamic field.
remove_tag
- Value type is array
- Default value is []
If this filter is successful, remove arbitrary tags from the event.
Tags can be dynamic and include parts of the event using the %{field}
syntax.
Example:
filter {
  elastic_integration {
    remove_tag => [ "foo_%{somefield}" ]
  }
}

# You can also remove multiple tags at once:
filter {
  elastic_integration {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag" ]
  }
}
If the event has field "somefield" == "hello", this filter, on success, would remove the tag foo_hello if it is present. The second example would remove a sad, unwanted tag as well.