Fleet and Elastic Agent 8.15.1


Review important information about the Fleet and Elastic Agent 8.15.1 release.

Bug fixes

Fleet
  • Remove duplicative retries from client-side requests to APIs that depend on EPR (#190722).
  • Add mappings for properties of nested objects that were previously omitted (#191730).
Elastic Agent
  • Fix the Debian packaging to properly copy the state.enc and state.yml files to the new version of the Elastic Agent. #5260 #5101
  • Switch from wall clocks to monotonic clocks for component check-in calculation. #5284 #5277
  • For a failed installation, return a nil error instead of syscall.Errno(0) which indicates a successful operation on Windows. #5317 #4496

Known issues

Fleet configures additional properties in some nested objects in index templates of integrations.

Details

A bug fix intended for release in 8.16.0 was also included in 8.15.1. It resolves a real issue where some mappings were not being generated, but as a result, installing some integrations in 8.15.1 adds mappings that were not present when using 8.15.0.

Impact

Users may notice that some index templates include additional mappings for the same package versions.

The memory usage of Beats-based integrations is limited by the maximum size of the memory queue rather than by the number of events actively in the queue, regardless of actual usage.

Details

In 8.15, events in the memory queue are not freed when they are acknowledged, as intended, but only when they are overwritten by later events in the queue buffer. For example, if a configuration has a queue size of 5000 but the input data is low-volume and only 100 events are active at once, the queue gradually accumulates events until it holds 5000 in memory at once, and only then starts replacing them with new events.
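The retention behavior described above can be illustrated with a minimal sketch. All types and names here are hypothetical, not the actual Beats queue implementation: a fixed-size ring buffer whose slots are only reclaimed when overwritten, so acknowledging an event does not release its memory.

```go
package main

import "fmt"

// queue is a hypothetical sketch of a fixed-size ring buffer in which
// slots are reclaimed only when overwritten by later events.
type queue struct {
	buf  []*[1024]byte // each slot keeps a reference to an event payload
	next int           // index of the next slot to write
}

func newQueue(size int) *queue { return &queue{buf: make([]*[1024]byte, size)} }

// publish writes an event into the next slot, overwriting (and thereby
// finally freeing) whatever payload was stored there before.
func (q *queue) publish() {
	q.buf[q.next] = new([1024]byte)
	q.next = (q.next + 1) % len(q.buf)
}

// ack is a no-op in this sketch: acknowledged events stay referenced
// until their slot is overwritten, which is the behavior described above.
func (q *queue) ack() {}

// retained counts slots still holding a reference to an event payload.
func (q *queue) retained() int {
	n := 0
	for _, e := range q.buf {
		if e != nil {
			n++
		}
	}
	return n
}

func main() {
	q := newQueue(5000)
	// Low-volume input: events are acknowledged immediately, so only a
	// handful are ever in flight, yet retained memory keeps growing.
	for i := 0; i < 300; i++ {
		q.publish()
		q.ack()
	}
	fmt.Println(q.retained()) // prints 300
}
```

Retained memory tracks the total number of events published (up to the queue size) rather than the number of events actually in flight, which is why low-throughput configurations with large queues see the largest increase.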

See Beats issue #40705.

Impact

Memory usage may be higher than in previous releases depending on the throughput of Elastic Agent. A fix is planned for 8.15.4.

  • The worst memory increase occurs for low-throughput configurations with large queues.
  • For users whose queues were already sized in proportion to their throughput, memory use increases only marginally.
  • Affected users can mitigate the higher memory usage by lowering their queue size.
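As a hedged sketch of the mitigation, the memory queue can be made smaller with Beats-style `queue.mem` settings. The values below are illustrative only, and where these settings live (standalone Beat configuration versus an Elastic Agent policy override) depends on your deployment, so verify against the documentation for your setup.

```yaml
# Illustrative values only: a smaller queue caps how many events can be
# retained in memory at once.
queue.mem:
  events: 1000            # maximum number of events the queue can hold
  flush.min_events: 256   # minimum batch size forwarded to the output
  flush.timeout: 1s       # maximum wait before flushing a partial batch
```

Lowering `queue.mem.events` bounds the worst-case retained memory directly, at the cost of smaller batches under high throughput.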