Fleet and Elastic Agent 8.15.3

Review important information about the Fleet and Elastic Agent 8.15.3 release.

Known issues

The memory usage of Beats-based integrations is not limited by the number of events actively in the memory queue, as intended, but by the maximum configured size of the memory queue, regardless of how much of it is actually in use.

Details

In 8.15, events in the memory queue are not freed when they are acknowledged (the intended behavior), but only when they are overwritten by later events in the queue buffer. For example, if a configuration has a queue size of 5000 but the input data is low-volume, with only 100 events active at once, the queue gradually accumulates events until it holds 5000 in memory at once, and only then starts replacing old events with new ones.

See Beats issue #40705.

Impact

Memory usage may be higher than in previous releases depending on the throughput of Elastic Agent. A fix is planned for 8.15.4.

  • The largest memory increase affects low-throughput configurations with large queues.
  • For users whose queues were already sized in proportion to their throughput, memory use increases only marginally.
  • Affected users can mitigate the higher memory usage by lowering their queue size (see the configuration sketch after this list).
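As a minimal sketch of that mitigation, assuming a standalone elastic-agent.yml and the queue.mem settings exposed on the output configuration, the snippet below lowers the queue size on the default output. The values are illustrative only, not recommendations; Fleet-managed users would apply equivalent settings through the output's advanced YAML configuration instead.

    outputs:
      default:
        type: elasticsearch
        hosts: ["https://localhost:9200"]
        # Example values only: size the queue closer to the actual event
        # throughput instead of keeping a large, mostly idle queue.
        queue.mem.events: 1600
        queue.mem.flush.min_events: 800
        queue.mem.flush.timeout: 10s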

Security updates

Elastic Agent
  • Update Go version to 1.22.8. #5718

Enhancements

Elastic Agent
  • Adjust the default memory requests and limits for Elastic Agent when it runs in a Kubernetes cluster. #5614 #5613 #4729
  • Use a metadata watcher for ReplicaSets in the K8s provider to collect only the name and OwnerReferences, which are used to connect Pods to Deployments and DaemonSets. #5699 #5623

Bug fixes

Elastic Agent
  • Add pprof endpoints to the monitoring server if they’re enabled in the Elastic Agent configuration (see the configuration sketch after this list). #5562
  • Stop the elastic-agent inspect command from printing the output configuration twice. #5692 #4471
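
The pprof fix above only takes effect when profiling is enabled in the Elastic Agent configuration. As a minimal sketch, assuming a standalone elastic-agent.yml and the agent.monitoring.pprof.enabled setting, the snippet below enables the local monitoring HTTP server together with its pprof endpoints:

    agent.monitoring:
      enabled: true
      # Expose the local monitoring HTTP server (6791 is the default port).
      http:
        enabled: true
        host: localhost
        port: 6791
      # Serve the pprof profiling endpoints on that server.
      pprof.enabled: true

Once the agent restarts with this configuration, profiles should be reachable through the monitoring server, typically under the standard /debug/pprof/ paths.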