IMPORTANT: No additional bug fixes or documentation updates
will be released for this version. For the latest information, see the
current release documentation.
Logstash 6.4.0 Release Notes
Attention users of Kafka Output in Logstash 6.4.0
If you are using Kafka output and have upgraded to Logstash 6.4.0, you will see pipeline startup errors:
Pipeline aborted due to error {:pipeline_id=>"pipeline1", :exception=>org.apache.kafka.common.config.ConfigException: Invalid value 32768 for configuration receive.buffer.bytes: Expected value to be a 32-bit integer, but it was a java.lang.Long
This error was due to an incorrectly configured default value for the receive_buffer_bytes option (fixed in PR logstash-output-kafka #205), and false negative results on our CI due to incorrect exit code handling (fixed in logstash-output-kafka #204).
Kafka output plugin version 7.1.3 has been released. You can upgrade using:
bin/logstash-plugin update logstash-output-kafka
This version will be included in the next 6.4.1 patch release.
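After updating, you can confirm which version of the plugin is installed (this assumes a standard Logstash installation where bin/logstash-plugin is on your path):

```shell
# List the Kafka output plugin with its version; expect 7.1.3 or later
bin/logstash-plugin list --verbose logstash-output-kafka
```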
- Adds the Azure Module for integrating Azure activity logs and SQL diagnostic logs with the Elastic Stack.
- Adds the azure_event_hubs input plugin as a default plugin.
- Adds support for port customization in cloud id (#9877).
- Adds opt-in strict-mode for field reference (#9591).
- Adds syntax highlighting for expressions in Grok Debugger (Kibana #18572).
- Changes pipeline viewer visualization to use a more tree-like layout to express the structure of the pipeline configuration (Kibana #18597).
- Fixes incorrect pipeline shutdown logging (#9688).
- Fixes incorrect type handling between Java pipeline and Ruby pipeline (#9671).
- Fixes possible keystore corruption by ensuring separate output streams (#9582).
- Continues moving parts of Logstash core from Ruby to Java ("Javafication"), with some general code cleanup (#9414, #9415, #9416, #9422, #9482, #9486, #9489, #9490, #9491, #9496, #9520, #9587, #9574, #9610, #9620, #9631, #9632, #9633, #9661, #9662, #9665, #9667, #9668, #9670, #9676, #9687, #9693, #9697, #9699, #9717, #9723, #9731, #9740, #9742, #9743, #9751, #9752, #9765).
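The opt-in strict mode for field references is enabled through a setting in logstash.yml; a minimal sketch, assuming the config.field_reference.parser setting introduced alongside #9591:

```yaml
# logstash.yml
# STRICT rejects ambiguous or illegal field references at pipeline
# compile time instead of tolerating them silently.
config.field_reference.parser: STRICT
```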
Plugins
Rubydebug Codec
- Fixes crash that could occur on startup if $HOME was unset or if ${HOME}/.aprc was unreadable, by pinning the awesome_print dependency to a release before the bug was introduced. #5
Fingerprint Filter
- Adds support for non-keyed, regular hash functions. #18
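With non-keyed hash support, a fingerprint filter no longer needs a key for plain (non-HMAC) hashing. A minimal sketch; the source field and target are illustrative:

```
filter {
  fingerprint {
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "SHA256"   # previously the SHA family required a key (HMAC); plain hashing now works
  }
}
```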
KV Filter
Azure Event Hubs Input
- Initial version of the azure_event_hubs input plugin, which supersedes logstash-input-azureeventhub.
Beats Input
- Adds add_hostname flag to enable/disable the population of the host field from beats.hostname. #340
- Fixes handling of batches where the sequence numbers do not start with 1. #342
- Changes project to use gradle version 4.8.1. #334
- Adds ssl_peer_metadata option. #327
- Fixes ssl_verify_mode => peer. #326
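For example, to opt out of the host-field population added in #340, the new flag can be disabled; a sketch, assuming the conventional Beats port 5044:

```
input {
  beats {
    port => 5044
    add_hostname => false   # do not copy beats.hostname into the host field
  }
}
```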
Exec Input
- Fixes issue where certain log entries incorrectly referenced jdbc input instead of exec input. #21
File Input
- Adds new feature: mode setting. Introduces two modes: tail mode is the existing behaviour for tailing; read mode is new behaviour that is optimized for reading complete file content. Please read the docs to fully appreciate the benefits of read mode.
- Adds new feature: file completion actions. Settings file_completed_action and file_completed_log_path control what actions to take after a file is completely read. Applicable: read mode only.
- Adds new feature: in read mode, compressed files can be processed (GZIP only).
- Adds new feature: files are sorted after being discovered. Settings file_sort_by and file_sort_direction control the sort order. Applicable: any mode.
- Adds new feature: banded or striped file processing. Settings file_chunk_size and file_chunk_count control banded or striped processing. Applicable: any mode.
- Adds new feature: sincedb_clean_after setting. Introduces expiry of sincedb records. The default is 14 days. If, after sincedb_clean_after days, no activity has been detected on a file (inode), the record expires and is not written to disk. The persisted record now includes the "last activity seen" timestamp. Applicable: any mode.
- Moves Filewatch code into the plugin folder and reworks it to use Logstash facilities like logging and environment.
- Adds much better support for file rotation schemes of copy/truncate and rename cascading. Applies to tail mode only.
- Adds support for processing files over remote mounts, e.g. NFS. Before, it was possible to read into memory that was allocated but not filled with data, resulting in ASCII NUL (0) bytes in the message field. Now, files are read up to the size given by the remote filesystem client. Applies to tail and read modes.
- Fixes read mode of regular files: a sincedb write is requested in each read loop iteration rather than waiting for the end of file to be reached. Note: for gz files, the sincedb entry can only be updated at the end of the file, as it is not possible to seek into a compressed file and begin reading from that position. #196
- Adds support for string durations in some settings, e.g. stat_interval => "750 ms". #194
- Fixes require winhelper error on Windows. #184
- Fixes issue where, when no delimiter was found in a chunk, the chunk was reread and no forward progress was made in the file. #185
- Fixes JAR_VERSION read problem that prevented Logstash from starting. #180
- Fixes sincedb write error when using /dev/null, which repeatedly caused a plugin restart. #182
- Fixes a regression where files discovered after the first discovery were not always read from the beginning. Applies to tail mode only. #198
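Several of the new read-mode settings can be combined; a sketch in which the path and values are illustrative, not recommendations:

```
input {
  file {
    path => "/var/log/archive/*.log.gz"   # read mode can process GZIP files
    mode => "read"
    file_completed_action => "log"        # or "delete" / "log_and_delete"
    file_completed_log_path => "/var/log/completed.log"
    file_sort_by => "last_modified"       # or "path"
    file_sort_direction => "asc"
    sincedb_clean_after => "14 days"      # string durations are now accepted
  }
}
```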
Http Input
- Replaces Puma web server with Netty. #73
- Adds request_headers_target_field and remote_host_target_field configuration options, defaulting to headers and host respectively. #68
- Sanitizes the content-type header with getMimeType. #87
- Moves most message handling code to Java. #85
- Fixes issue to respond with correct http protocol version. #84
- Adds support for crt/key certificates.
- Deprecates jks support.
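The new target-field options from #68 can also be set explicitly, for example to keep the request metadata out of the event's top-level fields; a sketch with illustrative values:

```
input {
  http {
    port => 8080
    request_headers_target_field => "[@metadata][headers]"   # default: headers
    remote_host_target_field => "[@metadata][remote_host]"   # default: host
  }
}
```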
Jdbc Input
- Fixes crash that occurs when receiving string input that cannot be coerced to UTF-8 (such as BLOB data). #291
S3 Input
- Adds ability to optionally include S3 object properties inside @metadata. #155
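A sketch of enabling this, assuming the include_object_properties option name from #155; the bucket name is illustrative:

```
input {
  s3 {
    bucket => "my-logs-bucket"           # illustrative bucket name
    include_object_properties => true    # copies S3 object properties into @metadata
  }
}
```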
Kafka Output
- Fixes handling of two settings that weren’t wired to the kafka client. #198