Troubleshooting Logstash
Installation and setup
Inaccessible temp directory
Certain versions of the JRuby runtime and libraries in certain plugins (the Netty network library in the TCP input, for example) copy executable files to the temp directory. This situation causes subsequent failures when /tmp is mounted noexec.
Sample error
[2018-03-25T12:23:01,149][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: org.jruby.exceptions.RaiseException: (LoadError) Could not load FFI Provider: (NotImplementedError) FFI not available: java.lang.UnsatisfiedLinkError: /tmp/jffi5534463206038012403.so: /tmp/jffi5534463206038012403.so: failed to map segment from shared object: Operation not permitted
Possible solutions
- Change settings to mount /tmp with exec.
- Specify an alternate directory using the -Djava.io.tmpdir setting in the jvm.options file.
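For example, assuming a directory such as /opt/logstash/tmp already exists and is writable by the Logstash user (the path itself is a placeholder), the jvm.options entry would look like this:

-Djava.io.tmpdir=/opt/logstash/tmp

Alternatively, on a typical Linux system, /tmp can be remounted with exec; this sketch assumes root privileges and that the change is acceptable under your security policy:

mount -o remount,exec /tmp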
Logstash startup
Illegal reflective access errors
Running Logstash with Java 11 results in warnings similar to these:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/{...}/jruby{...}jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
These errors appear related to a known issue with JRuby.
Workaround
Try adding these values to the jvm.options file.
--add-opens=java.base/java.security=ALL-UNNAMED
--add-opens=java.base/java.io=ALL-UNNAMED
--add-opens=java.base/java.nio.channels=org.jruby.dist
--add-opens=java.base/sun.nio.ch=org.jruby.dist
--add-opens=java.management/sun.management=org.jruby.dist
Notes:
- These settings allow Logstash to start without warnings in Java 11, but they prevent Logstash from starting on Java 8.
- This workaround has been tested with simple pipelines. If you have experiences to share, please comment in the issue.
Troubleshooting persistent queues
Symptoms of persistent queue problems include Logstash or one or more pipelines not starting successfully, accompanied by an error message similar to this one.
message=>"java.io.IOException: Page file size is too small to hold elements"
See the troubleshooting information in the persistent queues section for guidance on remediating these problems.
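As a starting point for that investigation, these are the persistent queue settings in logstash.yml that usually come into play. The values shown are the shipped defaults and the path is a placeholder; they are illustrative, not a recommended fix:

queue.type: persisted
# path.queue defaults to a "queue" directory under path.data; this path is illustrative
path.queue: /var/lib/logstash/queue
# per-page file size and total queue size (shipped defaults)
queue.page_capacity: 64mb
queue.max_bytes: 1024mb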
Data ingestion
Error response code 429
A 429 message indicates that an application is busy handling other requests. For example, Elasticsearch sends a 429 code to notify Logstash (or other indexers) that the bulk failed because the ingest queue is full. Logstash will retry sending documents.
Possible actions
Check Elasticsearch to see if it needs attention.
Sample error
[2018-08-21T20:05:36,111][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of org.elasticsearch.transport.TransportService$7@85be457 on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@538c9d8a[Running, pool size = 16, active threads = 16, queued tasks = 200, completed tasks = 685]]"})
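One way to see whether Elasticsearch is rejecting bulk requests is to check its thread pool statistics. Here is a minimal sketch using the _cat API, assuming the cluster is reachable on localhost:9200; recent Elasticsearch versions name the bulk pool write, while older versions call it bulk:

curl -s 'localhost:9200/_cat/thread_pool/write?v&h=node_name,name,active,queue,rejected'

A steadily growing rejected count indicates the cluster cannot keep up with the current indexing load.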
Performance
For general performance tuning tips and guidelines, see Performance Tuning.
Troubleshooting a pipeline
Pipelines, by definition, are unique. Here are some guidelines to help you get started.
- Identify the offending pipeline.
- Start small. Create a minimum pipeline that manifests the problem.
For basic pipelines, this configuration could be enough to make the problem show itself.
input {stdin{}} output {stdout{}}
Logstash can separate logs by pipeline. This feature can help you identify the offending pipeline.
Set pipeline.separate_logs: true in your logstash.yml to enable the log-per-pipeline feature.
For more complex pipelines, the problem could be caused by a series of plugins in a specific order. Troubleshooting these pipelines usually requires trial and error. Start by systematically removing input and output plugins until you're left with the minimum set that manifests the issue, as in the sketch below.
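For example, a minimal sketch of this approach keeps the stdin/stdout pair and re-adds a single suspect plugin at a time. The grok filter and pattern here are purely illustrative and not taken from any real configuration:

input { stdin {} }
filter {
  # Re-add one plugin from the original pipeline and re-test before adding the next.
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
}
output { stdout { codec => rubydebug } }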
We want to expand this section to make it more helpful. If you have troubleshooting tips to share, please:
- create an issue at https://github.com/elastic/logstash/issues, or
- create a pull request with your proposed changes at https://github.com/elastic/logstash.
Logging level can affect performance
Symptoms
Simple filters such as mutate or json can take several milliseconds per event to execute. Inputs and outputs might be affected, too.
Background
The different plugins running on Logstash can be quite verbose if the logging level is set to debug or trace. Because the logging library used in Logstash is synchronous, heavy logging can affect performance.
Solution
Reset the logging level to info.
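Here is a sketch of the two usual ways to do this. The runtime example assumes the Logstash monitoring API is listening on the default localhost:9600 and uses the elasticsearch output logger purely as an illustration:

# In logstash.yml (takes effect on restart):
log.level: info

# At runtime, through the logging API:
curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d '
{
  "logger.logstash.outputs.elasticsearch" : "INFO"
}'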