Kafka output settings

Specify these settings to send data over a secure connection to Kafka. In the Fleet Output settings, make sure that the Kafka output type is selected.

If you plan to use Logstash to modify Elastic Agent output data before it's sent to Kafka, refer to the guidance later on this page.
General settings

| Setting | Description |
|---|---|
| Version | The Kafka protocol version that Elastic Agent will request when connecting. Defaults to `1.0.0`. |
| Hosts | The addresses your Elastic Agents will use to connect to one or more Kafka brokers, in the format `host:port`, for example `localhost:9092`. Refer to the Fleet Server documentation for default ports and other configuration details. |
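For orientation, here is a minimal sketch of how these settings might render in the Kafka output section of a standalone agent policy, assuming the Beats-style Kafka output options; the broker address and topic are placeholders, not defaults:

```yaml
outputs:
  default:
    type: kafka
    hosts:
      - "localhost:9092"      # placeholder broker in host:port format
    version: "1.0.0"          # Kafka protocol version the agent requests
    topic: "elastic-agent"    # hypothetical topic; see Topics settings below
```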
Authentication settings

Select the mechanism that Elastic Agent uses to authenticate with Kafka.

| Setting | Description |
|---|---|
| None | No authentication is used between Elastic Agent and Kafka. This is the default option. In production, it's recommended to select an authentication method. |
| Username / Password | Connect to Kafka with a username and password. Provide your username and password, and select a SASL (Simple Authentication and Security Layer) mechanism for your login credentials. When SCRAM is enabled, Elastic Agent uses the SCRAM mechanism to authenticate the user credential. SCRAM is based on the IETF RFC 5802 standard, which describes a challenge-response mechanism for authenticating users. To prevent unauthorized access, your Kafka password is stored as a secret value. While secret storage is recommended, you can choose to override this setting and store the password as plain text in the agent policy definition. Secret storage requires Fleet Server version 8.12 or higher. Note that this setting can also be stored as a secret value or as plain text for preconfigured outputs. See Preconfiguration settings in the Kibana Guide to learn more. |
| SSL | Authenticate using the Secure Sockets Layer (SSL) protocol. Provide your client SSL certificate and client SSL certificate key. |
| Server SSL certificate authorities | The CA certificate to use to connect to Kafka. This is the CA used to generate the certificate and key for Kafka. Copy and paste in the full contents of the CA certificate. This setting is optional. Click Add row to specify additional certificate authorities. |
| Verification mode | Controls the verification of server certificates. Valid values are: `full` (verifies that the certificate is signed by a trusted authority and that the server's hostname matches the names in the certificate), `strict` (like `full`, but rejects the connection if the certificate's Subject Alternative Name is empty), `certificate` (verifies that the certificate is signed by a trusted authority, without hostname verification), and `none` (performs no verification; not recommended outside of testing). The default value is `full`. |
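As an illustration, a hedged sketch of username/password authentication with SCRAM as it might appear in the rendered Kafka output YAML; the credentials are placeholders, and the `sasl.mechanism` values follow the Beats-style Kafka output options:

```yaml
outputs:
  default:
    type: kafka
    hosts:
      - "kafka-broker:9092"   # placeholder broker address
    username: "agent-user"    # placeholder SASL username
    password: "changeme"      # placeholder; Fleet stores this as a secret value
    sasl:
      mechanism: SCRAM-SHA-256   # one of PLAIN, SCRAM-SHA-256, SCRAM-SHA-512
```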
Partitioning settings

The number of partitions created is set automatically by the Kafka broker based on the list of topics. Records are then published to partitions either randomly, in round-robin order, or according to a calculated hash.

| Setting | Description |
|---|---|
| Random | Publish records to Kafka output broker event partitions randomly. Specify the number of events to be published to the same partition before the partitioner selects a new partition. |
| Round robin | Publish records to Kafka output broker event partitions in a round-robin fashion. Specify the number of events to be published to the same partition before the partitioner selects a new partition. |
| Hash | Publish records to Kafka output broker event partitions based on a hash computed from the specified list of fields. If a field is not specified, the Kafka event key value is used. |
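A hedged sketch of how a hash partitioning strategy might render in YAML, assuming the Beats-style `partition.hash` option; the field list is a placeholder:

```yaml
outputs:
  default:
    type: kafka
    hosts:
      - "kafka-broker:9092"   # placeholder
    partition:
      hash:
        # placeholder fields to hash; if empty, the Kafka event key is used
        hash: ["event.dataset", "agent.id"]
```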
Topics settings

Use this option to set the Kafka topic for each Elastic Agent event.

| Setting | Description |
|---|---|
| Default topic | Set a default topic to use for events sent by Elastic Agent to the Kafka output. You can set a static topic, or you can set the topic from a custom event field, which is useful if you're using the `add_fields` processor to tag each event with its destination topic. |
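For instance, a hedged sketch of routing events to a topic taken from a custom field; the field name `fields.log_topic` and the values shown are placeholders, assuming the Beats-style format-string support for `topic`:

```yaml
# Tag events with a destination topic in an input's processors (placeholder names):
processors:
  - add_fields:
      target: fields
      fields:
        log_topic: "nginx-logs"   # hypothetical per-input topic value

# Then reference that field in the Kafka output:
outputs:
  default:
    type: kafka
    hosts:
      - "kafka-broker:9092"       # placeholder
    topic: "%{[fields.log_topic]}"
```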
Header settings

A header is a key-value pair, and multiple headers can be included with the same key. Only string values are supported. These headers will be included in each produced Kafka message.

| Setting | Description |
|---|---|
| Key | The key to set in the Kafka header. |
| Value | The value to set in the Kafka header. Click Add header to configure additional headers to be included in each Kafka message. |
| Client ID | The configurable ClientID used for logging, debugging, and auditing purposes. The default is `Elastic`. |
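A hedged sketch of headers and a client ID in the rendered YAML, assuming the Beats-style `headers` and `client_id` options; the key/value pair is a placeholder:

```yaml
outputs:
  default:
    type: kafka
    hosts:
      - "kafka-broker:9092"   # placeholder
    headers:
      - key: "environment"    # hypothetical header key
        value: "production"   # header values must be strings
    client_id: "Elastic"      # ClientID used for logging, debugging, and auditing
```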
Compression settings

You can enable compression to reduce the volume of Kafka output.

| Setting | Description |
|---|---|
| Codec | Select a compression codec to use. Supported codecs are `Snappy`, `LZ4`, and `gzip`. |
| Level | For the `gzip` codec you can select a compression level. The level must be in the range of 1 (best speed) to 9 (best compression). Increasing the compression level reduces the network usage but increases the CPU usage. The default value is 4. |
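A hedged sketch of enabling gzip compression in the rendered YAML, assuming the Beats-style `compression` and `compression_level` options:

```yaml
outputs:
  default:
    type: kafka
    hosts:
      - "kafka-broker:9092"   # placeholder
    compression: gzip         # codec to use; snappy and lz4 take no level
    compression_level: 4      # gzip only: 1 = best speed, 9 = best compression
```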
Broker settings

Configure timeout and buffer size values for the Kafka brokers.

| Setting | Description |
|---|---|
| Broker timeout | The maximum length of time a Kafka broker waits for the required number of ACKs before timing out (see the ACK reliability setting below). |
| Broker reachability timeout | The maximum length of time that an Elastic Agent waits for a response from a Kafka broker before timing out. The default is 30 seconds. |
| ACK reliability | The ACK reliability level required from the broker. Options are: Wait for local commit, Wait for all replicas to commit, and Do not wait. The default is Wait for local commit. Note that if ACK reliability is set to Do not wait, no ACKs are required, and messages may be lost silently if an error occurs. |
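As a hedged sketch, these UI settings roughly correspond to the Beats-style `broker_timeout`, `timeout`, and `required_acks` YAML options; the values shown are illustrative:

```yaml
outputs:
  default:
    type: kafka
    hosts:
      - "kafka-broker:9092"   # placeholder
    broker_timeout: 10s       # how long the broker waits for the required ACKs
    timeout: 30s              # how long the agent waits for a broker response
    required_acks: 1          # 1 = local commit, -1 = all replicas, 0 = do not wait
```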
Other settings

| Setting | Description |
|---|---|
| Key | An optional formatted string specifying the Kafka event key. If configured, the event key can be extracted from the event using a format string. See the Kafka documentation for the implications of a particular choice of key; by default, the key is chosen by the Kafka cluster. |
| Proxy | Select a proxy URL for Elastic Agent to connect to Kafka. To learn about proxy configuration, refer to Using a proxy server with Elastic Agent and Fleet. |
| Advanced YAML configuration | YAML settings that will be added to the Kafka output section of each policy that uses this output. Make sure you specify valid YAML. The UI does not currently provide validation. See Advanced YAML configuration for descriptions of the available settings. |
| Make this output the default for agent integrations | When this setting is on, Elastic Agents use this output to send data if no other output is set in the agent policy. |
| Make this output the default for agent monitoring | When this setting is on, Elastic Agents use this output to send agent monitoring data if no other output is set in the agent policy. |
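For example, a hedged sketch of setting the event key from an event field, assuming the Beats-style `key` format-string option; the field choice is a hypothetical illustration:

```yaml
# Pasted into the Advanced YAML configuration box, or set in a standalone policy:
key: "%{[agent.id]}"   # hypothetical key; with hash partitioning and no field list,
                       # the key determines the partition
```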
Advanced YAML configuration

| Setting | Description |
|---|---|
| `backoff.init` | (string) The number of seconds to wait before trying to reconnect to Kafka after a network error. After waiting `backoff.init` seconds, Elastic Agent tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. Default: `1s` |
| `backoff.max` | (string) The maximum number of seconds to wait before attempting to connect to Kafka after a network error. Default: `60s` |
| `bulk_max_size` | (int) The maximum number of events to bulk in a single Kafka request. Default: `2048` |
| `bulk_flush_frequency` | (int) Duration to wait before sending bulk Kafka request. `0` is no delay. Default: `0` |
| `channel_buffer_size` | (int) Per Kafka broker number of messages buffered in output pipeline. Default: `256` |
| `client_id` | (string) The configurable ClientID used for logging, debugging, and auditing purposes. Default: `Elastic` |
| `codec` | Output codec configuration. You can specify either the `json` or `format` codec. By default the `json` codec is used. See the examples below this table. |
| `keep_alive` | (string) The keep-alive period for an active network connection. If `0s`, keep-alives are disabled. Default: `0s` |
| `max_message_bytes` | (int) The maximum permitted size of JSON-encoded messages. Bigger messages will be dropped. This value should be equal to or less than the broker's `message.max.bytes`. Default: `1000000` (bytes) |
| `metadata` | Kafka metadata update settings. The metadata contains information about brokers, topics, partitions, and active leaders to use for publishing. Sub-settings include `refresh_frequency`, the metadata refresh interval (defaults to 10 minutes), and `full`, which controls whether to fetch metadata for all topics rather than only the configured topics (defaults to `false`). |
| `queue.mem.events` | (int) The number of events the queue can store. This value should be evenly divisible by the smaller of `queue.mem.flush.min_events` or `bulk_max_size` to avoid sending partial batches to the output. Default: `3200` events |
| `queue.mem.flush.min_events` | (int) The minimum number of events required for publishing. If this value is set to `0` or `1`, events are available to the output immediately. Default: `1600` events |
| `queue.mem.flush.timeout` | (string) The maximum wait time for `queue.mem.flush.min_events` to be fulfilled. If set to `0s`, events are available to the output immediately. Default: `10s` |

Example configuration that uses the `json` codec with pretty printing enabled:

```yaml
output.console:
  codec.json:
    pretty: true
    escape_html: false
```

Example configuration that uses the `format` codec:

```yaml
output.console:
  codec.format:
    string: '%{[@timestamp]} %{[message]}'
```
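Putting a few of these together, a hedged sketch of what you might paste into the Advanced YAML configuration box; the values are illustrative tuning, not recommendations:

```yaml
backoff.init: 1s             # initial reconnect delay after a network error
backoff.max: 60s             # upper bound for the exponential backoff
bulk_max_size: 2048          # max events per Kafka request
keep_alive: 30s              # placeholder: enable TCP keep-alives
max_message_bytes: 1000000   # keep at or below the broker's message.max.bytes
```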
Kafka output and using Logstash to index data to Elasticsearch

If you are considering using Logstash to ship the data from Kafka to Elasticsearch, be aware that the structure of the documents sent from Elastic Agent to Kafka must not be modified by Logstash. We suggest disabling `ecs_compatibility` on both the `kafka` input and the `json` codec in order to make sure the input doesn't edit the fields and their contents.

The data streams set up by the integrations expect to receive events with the same structure and field names as if they were sent directly from an Elastic Agent.
Refer to the Logstash output for Elastic Agent documentation for more details.
```
input {
  kafka {
    ...
    ecs_compatibility => "disabled"
    codec => json { ecs_compatibility => "disabled" }
    ...
  }
}
...
```
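To make the fragment concrete, here is a hedged sketch of a complete pipeline; the broker address, topic, and Elasticsearch credentials are placeholders, and it assumes the `data_stream` option of the `elasticsearch` output plugin to route events to the data streams the integrations set up:

```
input {
  kafka {
    bootstrap_servers => "kafka-broker:9092"    # placeholder broker
    topics => ["elastic-agent"]                 # placeholder topic name
    ecs_compatibility => "disabled"             # keep the agent's document structure intact
    codec => json { ecs_compatibility => "disabled" }
  }
}

output {
  elasticsearch {
    hosts => ["https://elasticsearch:9200"]     # placeholder
    api_key => "<id>:<api_key>"                 # placeholder credentials
    data_stream => "true"                       # index into the original data streams
  }
}
```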