Functionbeat reached End of Support on October 18, 2023. Consider moving your deployments to the more versatile and efficient Elastic Serverless Forwarder.
functionbeat.reference.yml
The following reference file is available with your Functionbeat installation. It shows all non-deprecated Functionbeat options. You can copy from this file and paste configurations into the functionbeat.yml file to customize it.
The reference file is located in the same directory as the functionbeat.yml file. To locate the file, see Directory layout.
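For example, to enable the bundled cloudwatch function with a larger memory limit and send its events to Elasticsearch, you can copy the relevant options from the reference file into functionbeat.yml and uncomment them. The following is a minimal sketch, assuming a hypothetical deploy bucket and log group (my-deploy-bucket and /aws/lambda/my-app are placeholders, not defaults):

functionbeat.provider.aws.deploy_bucket: "my-deploy-bucket"   # placeholder bucket name
functionbeat.provider.aws.functions:
  - name: cloudwatch
    enabled: true
    type: cloudwatch_logs
    # Copied from the reference file and uncommented; the size must be a factor of 64.
    memory_size: 256MiB
    triggers:
      - log_group_name: /aws/lambda/my-app    # placeholder log group
output.elasticsearch:
  hosts: ["localhost:9200"]

After editing, you can check the merged settings with functionbeat's test subcommand, for example functionbeat test config -c functionbeat.yml.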
The contents of the file are included here for your convenience.
########################## Functionbeat Configuration ###########################

# This file is a full configuration example documenting all non-deprecated
# options in comments. For a shorter configuration example, that contains only
# the most common options, please see functionbeat.yml in the same directory.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/functionbeat/index.html

# ================================== Provider ==================================

# Configure functions to run on AWS Lambda, currently, we assume that the credentials
# are present in the environment to correctly create the function when using the CLI.
#
# Configure which S3 endpoint we should use.
functionbeat.provider.aws.endpoint: "s3.amazonaws.com"
# Configure which S3 bucket we should upload the lambda artifact to.
functionbeat.provider.aws.deploy_bucket: "functionbeat-deploy"

# Configure credentials of Functionbeat while deploying to AWS.
# Available options:
# * access_key_id, secret_access_key and/or session_token
#functionbeat.provider.aws.access_key_id: '${AWS_ACCESS_KEY_ID:""}'
#functionbeat.provider.aws.secret_access_key: '${AWS_SECRET_ACCESS_KEY:""}'
#functionbeat.provider.aws.session_token: '${AWS_SESSION_TOKEN:""}'
# * role_arn
#functionbeat.provider.aws.role_arn: arn:aws:iam::123456789012:role/test-fnb
# * credential_profile_name and/or shared_credential_file
#functionbeat.provider.aws.credential_profile_name: fnb-aws
#functionbeat.provider.aws.shared_credential_file: /etc/functionbeat/aws_credentials

functionbeat.provider.aws.functions:
  # Define the list of functions available, each function is required to have a unique name.

  # Create a function that accepts events coming from cloudwatchlogs.
  - name: cloudwatch
    enabled: false
    type: cloudwatch_logs

    # Description of the method to help identify them when you run multiple functions.
    description: "lambda function for cloudwatch logs"

    # Concurrency is the reserved number of instances for that function.
    # Default is 5.
    #
    # Note: There is a hard limit of 1000 functions of any kind per account.
    #concurrency: 5

    # The maximum memory allocated for this function, the configured size must be a factor of 64.
    # There is a hard limit of 3008MiB for each function. Default is 128MiB.
    #memory_size: 128MiB

    # The amount of time the function is allowed to run.
    #timeout: 3s

    # Execution role of the function.
    #role: arn:aws:iam::123456789012:role/MyFunction

    # Connect to private resources in an Amazon VPC.
    #virtual_private_cloud:
    #  security_group_ids: []
    #  subnet_ids: []

    # Dead letter queue configuration, this must be set to an ARN pointing to an SQS queue.
    #dead_letter_config.target_arn:

    # Tags are key-value pairs attached to the function.
    #tags:
    #  department: ops

    # Optional fields that you can specify to add additional information to the
    # output. Fields can be scalar values, arrays, dictionaries, or any nested
    # combination of these.
    #fields:
    #  env: staging

    # List of cloudwatch log group registered to that function.
    triggers:
      - log_group_name: /aws/lambda/functionbeat-cloudwatch
        #filter_pattern: mylog_

    # Define custom processors for this function.
    #processors:
    #  - dissect:
    #      tokenizer: "%{key1} %{key2}"

    # Set to true to publish fields with null values in events.
    #keep_null: false

  # Create a function that accepts events from SQS queues.
  - name: sqs
    enabled: false
    type: sqs

    # Description of the method to help identify them when you run multiple functions.
description: "lambda function for SQS events" # Concurrency, is the reserved number of instances for that function. # Default is 5. # # Note: There is a hard limit of 1000 functions of any kind per account. #concurrency: 5 # The maximum memory allocated for this function, the configured size must be a factor of 64. # There is a hard limit of 3008MiB for each function. Default is 128MiB. #memory_size: 128MiB # The amount of time the function is allowed to run. #timeout: 3s # Execution role of the function. #role: arn:aws:iam::123456789012:role/MyFunction # Connect to private resources in an Amazon VPC. #virtual_private_cloud: # security_group_ids: [] # subnet_ids: [] # Dead letter queue configuration, this must be set to an ARN pointing to an SQS queue. #dead_letter_config.target_arn: # Tags are key-value pairs attached to the function. #tags: # department: ops # Optional fields that you can specify to add additional information to the # output. Fields can be scalar values, arrays, dictionaries, or any nested # combination of these. #fields: # env: staging # List of SQS queues. triggers: # Arn for the SQS queue. - event_source_arn: arn:aws:sqs:us-east-1:xxxxx:myevents # Define custom processors for this function. #processors: # - decode_json_fields: # fields: ["message"] # process_array: false # max_depth: 1 # target: "" # overwrite_keys: false # # Set to true to publish fields with null values in events. #keep_null: false # Create a function that accepts events from Kinesis streams. - name: kinesis enabled: false type: kinesis # Description of the method to help identify them when you run multiple functions. description: "lambda function for Kinesis events" # Concurrency, is the reserved number of instances for that function. # Default is 5. # # Note: There is a hard limit of 1000 functions of any kind per account. #concurrency: 5 # The maximum memory allocated for this function, the configured size must be a factor of 64. # There is a hard limit of 3008MiB for each function. Default is 128MiB. #memory_size: 128MiB # The amount of time the function is allowed to run. #timeout: 3s # Execution role of the function. #role: arn:aws:iam::123456789012:role/MyFunction # Connect to private resources in an Amazon VPC. #virtual_private_cloud: # security_group_ids: [] # subnet_ids: [] # Dead letter queue configuration, this must be set to an ARN pointing to an SQS queue. #dead_letter_config.target_arn: # Tags are key-value pairs attached to the function. #tags: # department: ops # Optional fields that you can specify to add additional information to the # output. Fields can be scalar values, arrays, dictionaries, or any nested # combination of these. #fields: # env: staging # Define custom processors for this function. #processors: # This example extracts the raw data from events. # - decode_base64_field: # field: # from: message # to: message # - decompress_gzip_field: # field: # from: message # to: message # - decode_json_fields: # fields: ["message"] # process_array: false # max_depth: 1 # target: "" # overwrite_keys: false # List of Kinesis streams. triggers: # Arn for the Kinesis stream. - event_source_arn: arn:aws:kinesis:us-east-1:xxxxx:myevents # batch_size is the number of events read in a batch. # Default is 10. #batch_size: 100 # Starting position is where to start reading events from the Kinesis stream. # Default is trim_horizon. #starting_position: "trim_horizon" # parallelization_factor is the number of batches to process from each shard concurrently. # Default is 1. 
        #parallelization_factor: 1

    # Set to true to publish fields with null values in events.
    #keep_null: false

  # Create a function that accepts Cloudwatch logs from Kinesis streams.
  - name: cloudwatch-logs-kinesis
    enabled: false
    type: cloudwatch_logs_kinesis

    # Description of the method to help identify them when you run multiple functions.
    description: "lambda function for Cloudwatch logs in Kinesis events"

    # Set base64_encoded if your data is base64 encoded.
    #base64_encoded: false

    # Set compressed if your data is compressed with gzip.
    #compressed: true

    # Concurrency is the reserved number of instances for that function.
    # Default is 5.
    #
    # Note: There is a hard limit of 1000 functions of any kind per account.
    #concurrency: 5

    # The maximum memory allocated for this function, the configured size must be a factor of 64.
    # There is a hard limit of 3008MiB for each function. Default is 128MiB.
    #memory_size: 128MiB

    # Dead letter queue configuration, this must be set to an ARN pointing to an SQS queue.
    #dead_letter_config.target_arn:

    # Tags are key-value pairs attached to the function.
    #tags:
    #  department: ops

    # The amount of time the function is allowed to run.
    #timeout: 3s

    # Execution role of the function.
    #role: arn:aws:iam::123456789012:role/MyFunction

    # Connect to private resources in an Amazon VPC.
    #virtual_private_cloud:
    #  security_group_ids: []
    #  subnet_ids: []
    #
    # Define custom processors for this function.
    #processors:
    #  - decode_json_fields:
    #      fields: ["message"]
    #      process_array: false
    #      max_depth: 1
    #      target: ""
    #      overwrite_keys: false

    # List of Kinesis streams.
    triggers:
      # Arn for the Kinesis stream.
      - event_source_arn: arn:aws:kinesis:us-east-1:xxxxx:myevents

        # batch_size is the number of events read in a batch.
        # Default is 10.
        #batch_size: 100

        # Starting position is where to start reading events from the Kinesis stream.
        # Default is trim_horizon.
        #starting_position: "trim_horizon"

        # parallelization_factor is the number of batches to process from each shard concurrently.
        # Default is 1.
        #parallelization_factor: 1

    # Set to true to publish fields with null values in events.
    #keep_null: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
#name:

# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
#fields:
#  env: staging

# If this option is set to true, the custom fields are stored as top-level
# fields in the output document instead of being grouped under a field
# sub-dictionary. Default is false.
#fields_under_root: false

# Configure the precision of all timestamps in Functionbeat.
# Available options: millisecond, microsecond, nanosecond
#timestamp.precision: millisecond

# Internal queue configuration for buffering events to be published.
# Queue settings may be overridden by performance presets in the
# Elasticsearch output. To configure them manually use "preset: custom".
#queue:
  # Queue type by name (default 'mem')
  # The memory queue will present all available events (up to the outputs
  # bulk_max_size) to the output, the moment the output is ready to serve
  # another batch of events.
  #mem:
    # Max number of events the queue can buffer.
    #events: 3200

    # Hints the minimum number of events stored in the queue,
    # before providing a batch of events to the outputs.
    # The default value is set to 2048.
    # A value of 0 ensures events are immediately available
    # to be sent to the outputs.
    #flush.min_events: 1600

    # Maximum duration after which events are available to the outputs,
    # if the number of events stored in the queue is < `flush.min_events`.
    #flush.timeout: 10s

  # The disk queue stores incoming events on disk until the output is
  # ready for them. This allows a higher event limit than the memory-only
  # queue and lets pending events persist through a restart.
  #disk:
    # The directory path to store the queue's data.
    #path: "${path.data}/diskqueue"

    # The maximum space the queue should occupy on disk. Depending on
    # input settings, events that exceed this limit are delayed or discarded.
    #max_size: 10GB

    # The maximum size of a single queue data file. Data in the queue is
    # stored in smaller segments that are deleted after all their events
    # have been processed.
    #segment_size: 1GB

    # The number of events to read from disk to memory while waiting for
    # the output to request them.
    #read_ahead: 512

    # The number of events to accept from inputs while waiting for them
    # to be written to disk. If event data arrives faster than it
    # can be written to disk, this setting prevents it from overflowing
    # main memory.
    #write_ahead: 2048

    # The duration to wait before retrying when the queue encounters a disk
    # write error.
    #retry_interval: 1s

    # The maximum length of time to wait before retrying on a disk write
    # error. If the queue encounters repeated errors, it will double the
    # length of its retry interval each time, up to this maximum.
    #max_retry_interval: 30s

# Sets the maximum number of CPUs that can be executed simultaneously. The
# default is the number of logical CPUs available in the system.
#max_procs:

# ================================= Processors =================================

# Processors are used to reduce the number of fields in the exported event or to
# enhance the event with external metadata. This section defines a list of
# processors that are applied one by one and the first one receives the initial
# event:
#
#   event -> filter1 -> event1 -> filter2 -> event2 ...
#
# The supported processors are drop_fields, drop_event, include_fields,
# decode_json_fields, and add_cloud_metadata.
#
# For example, you can use the following processors to keep the fields that
# contain CPU load percentages, but remove the fields that contain CPU ticks
# values:
#
#processors:
#  - include_fields:
#      fields: ["cpu"]
#  - drop_fields:
#      fields: ["cpu.user", "cpu.system"]
#
# The following example drops the events that have the HTTP response code 200:
#
#processors:
#  - drop_event:
#      when:
#        equals:
#          http.code: 200
#
# The following example renames the field a to b:
#
#processors:
#  - rename:
#      fields:
#        - from: "a"
#          to: "b"
#
# The following example tokenizes the string into fields:
#
#processors:
#  - dissect:
#      tokenizer: "%{key1} - %{key2}"
#      field: "message"
#      target_prefix: "dissect"
#
# The following example enriches each event with metadata from the cloud
# provider about the host machine. It works on EC2, GCE, DigitalOcean,
# Tencent Cloud, and Alibaba Cloud.
#
#processors:
#  - add_cloud_metadata: ~
#
# The following example enriches each event with the machine's local time zone
# offset from UTC.
#
#processors:
#  - add_locale:
#      format: offset
#
# The following example enriches each event with docker metadata, it matches
# given fields to an existing container id and adds info from that container:
#
#processors:
#  - add_docker_metadata:
#      host: "unix:///var/run/docker.sock"
#      match_fields: ["system.process.cgroup.id"]
#      match_pids: ["process.pid", "process.parent.pid"]
#      match_source: true
#      match_source_index: 4
#      match_short_id: false
#      cleanup_timeout: 60
#      labels.dedot: false
#
# To connect to Docker over TLS you must specify a client and CA certificate.
#
#ssl:
#  certificate_authority: "/etc/pki/root/ca.pem"
#  certificate: "/etc/pki/client/cert.pem"
#  key: "/etc/pki/client/cert.key"
#
# The following example enriches each event with docker metadata, it matches
# container id from log path available in `source` field (by default it expects
# it to be /var/lib/docker/containers/*/*.log).
#
#processors:
#  - add_docker_metadata: ~
#
# The following example enriches each event with host metadata.
#
#processors:
#  - add_host_metadata: ~
#
# The following example enriches each event with process metadata using
# process IDs included in the event.
#
#processors:
#  - add_process_metadata:
#      match_pids: ["system.process.ppid"]
#      target: system.process.parent
#
# The following example decodes fields containing JSON strings
# and replaces the strings with valid JSON objects.
#
#processors:
#  - decode_json_fields:
#      fields: ["field1", "field2", ...]
#      process_array: false
#      max_depth: 1
#      target: ""
#      overwrite_keys: false
#
#processors:
#  - decompress_gzip_field:
#      from: "field1"
#      to: "field2"
#      ignore_missing: false
#      fail_on_error: true
#
# The following example copies the value of the message to message_copied
#
#processors:
#  - copy_fields:
#      fields:
#        - from: message
#          to: message_copied
#      fail_on_error: true
#      ignore_missing: false
#
# The following example truncates the value of the message to 1024 bytes
#
#processors:
#  - truncate_fields:
#      fields:
#        - message
#      max_bytes: 1024
#      fail_on_error: false
#      ignore_missing: true
#
# The following example preserves the raw message under event.original
#
#processors:
#  - copy_fields:
#      fields:
#        - from: message
#          to: event.original
#      fail_on_error: false
#      ignore_missing: true
#  - truncate_fields:
#      fields:
#        - event.original
#      max_bytes: 1024
#      fail_on_error: false
#      ignore_missing: true
#
# The following example URL-decodes the value of field1 to field2
#
#processors:
#  - urldecode:
#      fields:
#        - from: "field1"
#          to: "field2"
#      ignore_missing: false
#      fail_on_error: true

# =============================== Elastic Cloud ================================

# These settings simplify using Functionbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Boolean flag to enable or disable the output module.
  #enabled: true

  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (http and 9200)
  # In case you specify an additional path, the scheme is required: http://localhost:9200/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
  hosts: ["localhost:9200"]

  # Performance presets configure other output fields to recommended values
  # based on a performance priority.
  # Options are "balanced", "throughput", "scale", "latency" and "custom".
  # Default if unspecified: "custom"
  preset: balanced

  # Set gzip compression level. Set to 0 to disable compression.
  # This field may conflict with performance presets. To set it
  # manually use "preset: custom".
  # The default is 1.
  #compression_level: 1

  # Configure escaping HTML symbols in strings.
  #escape_html: false

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

  # Dictionary of HTTP parameters to pass within the URL with index operations.
  #parameters:
    #param1: value1
    #param2: value2

  # Number of workers per Elasticsearch host.
  # This field may conflict with performance presets. To set it
  # manually use "preset: custom".
  #worker: 1

  # If set to true and multiple hosts are configured, the output plugin load
  # balances published events onto all Elasticsearch hosts. If set to false,
  # the output plugin sends all events to only one host (determined at random)
  # and will switch to another host if the currently selected one becomes
  # unreachable. The default value is true.
  #loadbalance: true

  # Optional data stream or index name. The default is "functionbeat-%{[agent.version]}".
  # In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
  #index: "functionbeat-%{[agent.version]}"

  # Optional ingest pipeline. By default, no pipeline will be used.
  #pipeline: ""

  # Optional HTTP path
  #path: "/elasticsearch"

  # Custom HTTP headers to add to each request
  #headers:
  #  X-My-Header: Contents of the header

  # Proxy server URL
  #proxy_url: http://proxy:3128

  # Whether to disable proxy settings for outgoing connections. If true, this
  # takes precedence over both the proxy_url field and any environment settings
  # (HTTP_PROXY, HTTPS_PROXY). The default is false.
  #proxy_disable: false

  # The number of times a particular Elasticsearch index operation is attempted. If
  # the indexing operation doesn't succeed after this many retries, the events are
  # dropped. The default is 3.
  #max_retries: 3

  # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
  # This field may conflict with performance presets. To set it
  # manually use "preset: custom".
  # The default is 1600.
  #bulk_max_size: 1600

  # The number of seconds to wait before trying to reconnect to Elasticsearch
  # after a network error. After waiting backoff.init seconds, the Beat
  # tries to reconnect. If the attempt fails, the backoff timer is increased
  # exponentially up to backoff.max. After a successful connection, the backoff
  # timer is reset. The default is 1s.
  #backoff.init: 1s

  # The maximum number of seconds to wait before attempting to connect to
  # Elasticsearch after a network error. The default is 60s.
  #backoff.max: 60s

  # The maximum amount of time an idle connection will remain idle
  # before closing itself. Zero means use the default of 60s. The
  # format is a Go language duration (example 60s is 60 seconds).
  # This field may conflict with performance presets. To set it
  # manually use "preset: custom".
  # The default is 3s.
  #idle_connection_timeout: 3s

  # Configure HTTP request timeout before failing a request to Elasticsearch.
  #timeout: 90

  # Prevents functionbeat from connecting to older Elasticsearch versions when set to `false`
  #allow_older_versions: true

  # Use SSL settings for HTTPS.
  #ssl.enabled: true

  # Controls the verification of certificates. Valid values are:
  # * full, which verifies that the provided certificate is signed by a trusted
  # authority (CA) and also verifies that the server's hostname (or IP address)
  # matches the names identified within the certificate.
  # * strict, which verifies that the provided certificate is signed by a trusted
  # authority (CA) and also verifies that the server's hostname (or IP address)
  # matches the names identified within the certificate. If the Subject Alternative
  # Name is empty, it returns an error.
  # * certificate, which verifies that the provided certificate is signed by a
  # trusted authority (CA), but does not perform any hostname verification.
  # * none, which performs no verification of the server's certificate. This
  # mode disables many of the security benefits of SSL/TLS and should only be used
  # after very careful consideration. It is primarily intended as a temporary
  # diagnostic mechanism when attempting to resolve TLS errors; its use in
  # production environments is strongly discouraged.
  # The default value is full.
  #ssl.verification_mode: full

  # List of supported/valid TLS versions. By default all TLS versions from 1.1
  # up to 1.3 are enabled.
  #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]

  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"

  # Optional passphrase for decrypting the certificate key.
  #ssl.key_passphrase: ''

  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []

  # Configure curve types for ECDHE-based cipher suites
  #ssl.curve_types: []

  # Configure what types of renegotiation are supported. Valid options are
  # never, once, and freely. Default is never.
  #ssl.renegotiation: never

  # Configure a pin that can be used to do extra validation of the verified certificate chain,
  # this allows you to ensure that a specific certificate is used to validate the chain of trust.
  #
  # The pin is a base64 encoded string of the SHA-256 fingerprint.
  #ssl.ca_sha256: ""

  # A root CA HEX encoded fingerprint. During the SSL handshake if the
  # fingerprint matches the root CA certificate, it will be added to
  # the provided list of root CAs (`certificate_authorities`), if the
  # list is empty or not defined, the matching certificate will be the
  # only one in the list. Then the normal SSL validation happens.
  #ssl.ca_trusted_fingerprint: ""

  # Enables restarting functionbeat if any file listed by `key`,
  # `certificate`, or `certificate_authorities` is modified.
  # This feature IS NOT supported on Windows.
  #ssl.restart_on_cert_change.enabled: false

  # Period to scan for changes on CA certificate files
  #ssl.restart_on_cert_change.period: 1m

  # Enable Kerberos support. Kerberos is automatically enabled if any Kerberos setting is set.
  #kerberos.enabled: true

  # Authentication type to use with Kerberos. Available options: keytab, password.
  #kerberos.auth_type: password

  # Path to the keytab file. It is used when auth_type is set to keytab.
  #kerberos.keytab: /etc/elastic.keytab

  # Path to the Kerberos configuration.
  #kerberos.config_path: /etc/krb5.conf

  # Name of the Kerberos user.
  #kerberos.username: elastic

  # Password of the Kerberos user. It is used when auth_type is set to password.
  #kerberos.password: changeme

  # Kerberos realm.
  #kerberos.realm: ELASTIC

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # Boolean flag to enable or disable the output module.
  #enabled: true

  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Number of workers per Logstash host.
  #worker: 1

  # Set gzip compression level.
  #compression_level: 3

  # Configure escaping HTML symbols in strings.
  #escape_html: false

  # Optional maximum time to live for a connection to Logstash, after which the
  # connection will be re-established. A value of `0s` (the default) will
  # disable this feature.
  #
  # Not yet supported for async connections (i.e. with the "pipelining" option set)
  #ttl: 30s

  # Optionally load-balance events between Logstash hosts. Default is false.
  #loadbalance: false

  # Number of batches to be sent asynchronously to Logstash while processing
  # new batches.
  #pipelining: 2

  # If enabled only a subset of events in a batch of events is transferred per
  # transaction. The number of events to be sent increases up to `bulk_max_size`
  # if no error is encountered.
  #slow_start: false

  # The number of seconds to wait before trying to reconnect to Logstash
  # after a network error. After waiting backoff.init seconds, the Beat
  # tries to reconnect. If the attempt fails, the backoff timer is increased
  # exponentially up to backoff.max. After a successful connection, the backoff
  # timer is reset. The default is 1s.
  #backoff.init: 1s

  # The maximum number of seconds to wait before attempting to connect to
  # Logstash after a network error. The default is 60s.
  #backoff.max: 60s

  # Optional index name. The default index name is set to functionbeat
  # in all lowercase.
  #index: 'functionbeat'

  # SOCKS5 proxy server URL
  #proxy_url: socks5://user:password@socks5-server:2233

  # Resolve names locally when using a proxy server. Defaults to false.
  #proxy_use_local_resolver: false

  # Use SSL settings for HTTPS.
  #ssl.enabled: true

  # Controls the verification of certificates. Valid values are:
  # * full, which verifies that the provided certificate is signed by a trusted
  # authority (CA) and also verifies that the server's hostname (or IP address)
  # matches the names identified within the certificate.
  # * strict, which verifies that the provided certificate is signed by a trusted
  # authority (CA) and also verifies that the server's hostname (or IP address)
  # matches the names identified within the certificate. If the Subject Alternative
  # Name is empty, it returns an error.
  # * certificate, which verifies that the provided certificate is signed by a
  # trusted authority (CA), but does not perform any hostname verification.
  # * none, which performs no verification of the server's certificate. This
  # mode disables many of the security benefits of SSL/TLS and should only be used
  # after very careful consideration. It is primarily intended as a temporary
  # diagnostic mechanism when attempting to resolve TLS errors; its use in
  # production environments is strongly discouraged.
  # The default value is full.
  #ssl.verification_mode: full

  # List of supported/valid TLS versions. By default all TLS versions from 1.1
  # up to 1.3 are enabled.
  #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]

  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"

  # Optional passphrase for decrypting the certificate key.
  #ssl.key_passphrase: ''

  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []

  # Configure curve types for ECDHE-based cipher suites
  #ssl.curve_types: []

  # Configure what types of renegotiation are supported. Valid options are
  # never, once, and freely. Default is never.
  #ssl.renegotiation: never

  # Configure a pin that can be used to do extra validation of the verified certificate chain,
  # this allows you to ensure that a specific certificate is used to validate the chain of trust.
  #
  # The pin is a base64 encoded string of the SHA-256 fingerprint.
  #ssl.ca_sha256: ""

  # A root CA HEX encoded fingerprint. During the SSL handshake if the
  # fingerprint matches the root CA certificate, it will be added to
  # the provided list of root CAs (`certificate_authorities`), if the
  # list is empty or not defined, the matching certificate will be the
  # only one in the list. Then the normal SSL validation happens.
  #ssl.ca_trusted_fingerprint: ""

  # Enables restarting functionbeat if any file listed by `key`,
  # `certificate`, or `certificate_authorities` is modified.
  # This feature IS NOT supported on Windows.
  #ssl.restart_on_cert_change.enabled: false

  # Period to scan for changes on CA certificate files
  #ssl.restart_on_cert_change.period: 1m

  # The number of times to retry publishing an event after a publishing failure.
  # After the specified number of retries, the events are typically dropped.
  # Some Beats, such as Filebeat and Winlogbeat, ignore the max_retries setting
  # and retry until all events are published. Set max_retries to a value less
  # than 0 to retry until all events are published. The default is 3.
  #max_retries: 3

  # The maximum number of events to bulk in a single Logstash request. The
  # default is 2048.
  #bulk_max_size: 2048

  # The number of seconds to wait for responses from the Logstash server before
  # timing out. The default is 30s.
  #timeout: 30s

# ------------------------------- Console Output -------------------------------
#output.console:
  # Boolean flag to enable or disable the output module.
  #enabled: true

  # Configure JSON encoding
  #codec.json:
    # Pretty-print JSON event
    #pretty: false

    # Configure escaping HTML symbols in strings.
    #escape_html: false

# =================================== Paths ====================================

# The home path for the Functionbeat installation. This is the default base path
# for all other path settings and for miscellaneous files that come with the
# distribution (for example, the sample dashboards).
# If not set by a CLI flag or in the configuration file, the default for the
# home path is the location of the binary.
#path.home:

# The configuration path for the Functionbeat installation. This is the default
# base path for configuration files, including the main YAML configuration file
# and the Elasticsearch template file. If not set by a CLI flag or in the
# configuration file, the default for the configuration path is the home path.
#path.config: ${path.home}

# The data path for the Functionbeat installation. This is the default base path
# for all the files in which Functionbeat needs to store its data. If not set by a
# CLI flag or in the configuration file, the default for the data path is a data
# subdirectory inside the home path.
#path.data: ${path.home}/data

# The logs path for a Functionbeat installation. This is the default location for
# the Beat's log files. If not set by a CLI flag or in the configuration file,
# the default for the logs path is a logs subdirectory inside the home path.
#path.logs: ${path.home}/logs

# ================================== Keystore ==================================

# Location of the Keystore containing the keys and their sensitive values.
#keystore.path: "${path.config}/beats.keystore"

# ================================= Dashboards =================================

# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The directory from where to read the dashboards. The default is the `kibana`
# folder in the home path.
#setup.dashboards.directory: ${path.home}/kibana

# The URL from where to download the dashboard archive. It is used instead of
# the directory if it has a value.
#setup.dashboards.url:

# The file archive (zip file) from where to read the dashboards. It is used instead
# of the directory when it has a value.
#setup.dashboards.file:

# In case the archive contains the dashboards from multiple Beats, this lets you
# select which one to load. You can load all the dashboards in the archive by
# setting this to the empty string.
#setup.dashboards.beat: functionbeat

# The name of the Kibana index to use for setting the configuration. Default is ".kibana"
#setup.dashboards.kibana_index: .kibana

# The Elasticsearch index name. This overwrites the index name defined in the
# dashboards and index pattern. Example: testbeat-*
#setup.dashboards.index:

# Always use the Kibana API for loading the dashboards instead of autodetecting
# how to install the dashboards by first querying Elasticsearch.
#setup.dashboards.always_kibana: false

# If true and Kibana is not reachable at the time when dashboards are loaded,
# it will retry to reconnect to Kibana instead of exiting with an error.
#setup.dashboards.retry.enabled: false

# Duration interval between Kibana connection retries.
#setup.dashboards.retry.interval: 1s

# Maximum number of retries before exiting with an error, 0 for unlimited retrying.
#setup.dashboards.retry.maximum: 0

# ================================== Template ==================================

# A template is used to set the mapping in Elasticsearch
# By default template loading is enabled and the template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones.

# Set to false to disable template loading.
#setup.template.enabled: true

# Template name. By default the template name is "functionbeat-%{[agent.version]}"
# The template name and pattern have to be set in case the Elasticsearch index pattern is modified.
#setup.template.name: "functionbeat-%{[agent.version]}"

# Template pattern. By default the template pattern is "functionbeat-%{[agent.version]}" to apply to the default index settings.
# The template name and pattern have to be set in case the Elasticsearch index pattern is modified.
#setup.template.pattern: "functionbeat-%{[agent.version]}"

# Path to fields.yml file to generate the template
#setup.template.fields: "${path.config}/fields.yml"

# A list of fields to be added to the template and Kibana index pattern. Also
# specify setup.template.overwrite: true to overwrite the existing template.
#setup.template.append_fields:
#- name: field_name
#  type: field_type

# Enable JSON template loading. If this is enabled, the fields.yml is ignored.
#setup.template.json.enabled: false

# Path to the JSON template file
#setup.template.json.path: "${path.config}/template.json"

# Name under which the template is stored in Elasticsearch
#setup.template.json.name: ""

# Set this option if the JSON template is a data stream.
#setup.template.json.data_stream: false

# Overwrite existing template
# Do not enable this option for more than one instance of functionbeat as it might
# overload your Elasticsearch with too many update requests.
#setup.template.overwrite: false

# Elasticsearch template settings
setup.template.settings:

  # A dictionary of settings to place into the settings.index dictionary
  # of the Elasticsearch template. For more details, please check
  # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html
  #index:
    #number_of_shards: 1
    #codec: best_compression

  # A dictionary of settings for the _source field. For more details, please check
  # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html
  #_source:
    #enabled: false

# ====================== Index Lifecycle Management (ILM) ======================

# Configure index lifecycle management (ILM) to manage the backing indices
# of your data streams.

# Enable ILM support. Valid values are true, or false.
#setup.ilm.enabled: true

# Set the lifecycle policy name. The default policy name is
# 'beatname'.
#setup.ilm.policy_name: "mypolicy"

# The path to a JSON file that contains a lifecycle policy configuration. Used
# to load your own lifecycle policy.
#setup.ilm.policy_file:

# Disable the check for an existing lifecycle policy. The default is true.
# If you set this option to false, lifecycle policy will not be installed,
# even if setup.ilm.overwrite is set to true.
#setup.ilm.check_exists: true

# Overwrite the lifecycle policy at startup. The default is false.
#setup.ilm.overwrite: false

# ======================== Data Stream Lifecycle (DSL) =========================

# Configure Data Stream Lifecycle to manage data streams while connected to Serverless elasticsearch.
# These settings are mutually exclusive with ILM settings which are not supported in Serverless projects.

# Enable DSL support. Valid values are true, or false.
#setup.dsl.enabled: true

# Set the lifecycle policy name or pattern. For DSL, this name must match the data stream that the lifecycle is for.
# The default data stream pattern is "functionbeat-%{[agent.version]}"
# The template string `%{[agent.version]}` will resolve to the current stack version.
# The other possible template value is `%{[beat.name]}`.
#setup.dsl.data_stream_pattern: "functionbeat-%{[agent.version]}"

# The path to a JSON file that contains a lifecycle policy configuration. Used
# to load your own lifecycle policy.
# If no custom policy is specified, a default policy with a lifetime of 7 days will be created.
#setup.dsl.policy_file:

# Disable the check for an existing lifecycle policy. The default is true. If
# you disable this check, set setup.dsl.overwrite: true so the lifecycle policy
# can be installed.
#setup.dsl.check_exists: true

# Overwrite the lifecycle policy at startup. The default is false.
#setup.dsl.overwrite: false

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

  # Optional HTTP path
  #path: ""

  # Optional Kibana space ID.
  #space.id: ""

  # Custom HTTP headers to add to each request
  #headers:
  #  X-My-Header: Contents of the header

  # Use SSL settings for HTTPS.
  #ssl.enabled: true

  # Controls the verification of certificates. Valid values are:
  # * full, which verifies that the provided certificate is signed by a trusted
  # authority (CA) and also verifies that the server's hostname (or IP address)
  # matches the names identified within the certificate.
  # * strict, which verifies that the provided certificate is signed by a trusted
  # authority (CA) and also verifies that the server's hostname (or IP address)
  # matches the names identified within the certificate. If the Subject Alternative
  # Name is empty, it returns an error.
  # * certificate, which verifies that the provided certificate is signed by a
  # trusted authority (CA), but does not perform any hostname verification.
  # * none, which performs no verification of the server's certificate. This
  # mode disables many of the security benefits of SSL/TLS and should only be used
  # after very careful consideration. It is primarily intended as a temporary
  # diagnostic mechanism when attempting to resolve TLS errors; its use in
  # production environments is strongly discouraged.
  # The default value is full.
  #ssl.verification_mode: full

  # List of supported/valid TLS versions. By default all TLS versions from 1.1
  # up to 1.3 are enabled.
  #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]

  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"

  # Optional passphrase for decrypting the certificate key.
  #ssl.key_passphrase: ''

  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []

  # Configure curve types for ECDHE-based cipher suites
  #ssl.curve_types: []

  # Configure what types of renegotiation are supported. Valid options are
  # never, once, and freely. Default is never.
  #ssl.renegotiation: never

  # Configure a pin that can be used to do extra validation of the verified certificate chain,
  # this allows you to ensure that a specific certificate is used to validate the chain of trust.
  #
  # The pin is a base64 encoded string of the SHA-256 fingerprint.
  #ssl.ca_sha256: ""

  # A root CA HEX encoded fingerprint. During the SSL handshake if the
  # fingerprint matches the root CA certificate, it will be added to
  # the provided list of root CAs (`certificate_authorities`), if the
  # list is empty or not defined, the matching certificate will be the
  # only one in the list. Then the normal SSL validation happens.
  #ssl.ca_trusted_fingerprint: ""

# ================================== Logging ===================================

# There are four options for the log output: file, stderr, syslog, eventlog
# The file output is the default.

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: info

# Enable debug output for selected components. To enable all selectors use ["*"]
# Other available selectors are "beat", "publisher", "service"
# Multiple selectors can be chained.
#logging.selectors: [ ]

# Send all logging output to stderr. The default is false.
#logging.to_stderr: false

# Send all logging output to syslog. The default is false.
#logging.to_syslog: false

# Send all logging output to Windows Event Logs. The default is false.
#logging.to_eventlog: false

# If enabled, Functionbeat periodically logs its internal metrics that have changed
# in the last period. For each metric that changed, the delta from the value at
# the beginning of the period is logged. Also, the total values for
# all non-zero internal metrics are logged on shutdown. The default is true.
#logging.metrics.enabled: true

# The period after which to log the internal metrics. The default is 30s.
#logging.metrics.period: 30s

# A list of metrics namespaces to report in the logs. Defaults to [stats].
# `stats` contains general Beat metrics. `dataset` may be present in some
# Beats and contains module or input metrics.
#logging.metrics.namespaces: [stats]

# Logging to rotating files. Set logging.to_files to false to disable logging to
# files.
logging.to_files: true
logging.files:
  # Configure the path where the logs are written. The default is the logs directory
  # under the home path (the binary location).
  #path: /var/log/functionbeat

  # The name of the files where the logs are written to.
  #name: functionbeat

  # Configure log file size limit. If the limit is reached, log file will be
  # automatically rotated.
  #rotateeverybytes: 10485760 # = 10MB

  # Number of rotated log files to keep. The oldest files will be deleted first.
  #keepfiles: 7

  # The permissions mask to apply when rotating log files. The default value is 0600.
  # Must be a valid Unix-style file permissions mask expressed in octal notation.
  #permissions: 0600

  # Enable log file rotation on time intervals in addition to the size-based rotation.
  # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
  # are boundary-aligned with minutes, hours, days, weeks, months, and years as
  # reported by the local system clock. All other intervals are calculated from the
  # Unix epoch. Defaults to disabled.
  #interval: 0

  # Rotate existing logs on startup rather than appending them to the existing
  # file. Defaults to true.
  #rotateonstartup: true

#=============================== Events Logging ===============================
# Some outputs will log raw events on errors like indexing errors in the
# Elasticsearch output, to prevent logging raw events (that may contain
# sensitive information) together with other log messages, a different
# log file, only for log entries containing raw events, is used. It will
# use the same level, selectors and all other configurations from the
# default logger, but it will have its own file configuration.
#
# Having a different log file for raw events also prevents event data
# from drowning out the regular log files.
#
# IMPORTANT: No matter the default logger output configuration, raw events
# will **always** be logged to a file configured by `logging.event_data.files`.
# logging.event_data:

# Logging to rotating files. Set logging.to_files to false to disable logging to
# files.
#logging.event_data.to_files: true
#logging.event_data:
  # Configure the path where the logs are written. The default is the logs directory
  # under the home path (the binary location).
  #path: /var/log/functionbeat

  # The name of the files where the logs are written to.
  #name: functionbeat-event-data

  # Configure log file size limit. If the limit is reached, log file will be
  # automatically rotated.
  #rotateeverybytes: 5242880 # = 5MB

  # Number of rotated log files to keep. The oldest files will be deleted first.
  #keepfiles: 2

  # The permissions mask to apply when rotating log files. The default value is 0600.
  # Must be a valid Unix-style file permissions mask expressed in octal notation.
  #permissions: 0600

  # Enable log file rotation on time intervals in addition to the size-based rotation.
  # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
  # are boundary-aligned with minutes, hours, days, weeks, months, and years as
  # reported by the local system clock. All other intervals are calculated from the
  # Unix epoch. Defaults to disabled.
  #interval: 0

  # Rotate existing logs on startup rather than appending them to the existing
  # file. Defaults to false.
  #rotateonstartup: false

# ============================= X-Pack Monitoring ==============================
# Functionbeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Functionbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (http and 9200)
  # In case you specify an additional path, the scheme is required: http://localhost:9200/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
  #hosts: ["localhost:9200"]

  # Set gzip compression level.
  #compression_level: 0

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "beats_system"
  #password: "changeme"

  # Dictionary of HTTP parameters to pass within the URL with index operations.
  #parameters:
    #param1: value1
    #param2: value2

  # Custom HTTP headers to add to each request
  #headers:
  #  X-My-Header: Contents of the header

  # Proxy server url
  #proxy_url: http://proxy:3128

  # The number of times a particular Elasticsearch index operation is attempted. If
  # the indexing operation doesn't succeed after this many retries, the events are
  # dropped. The default is 3.
  #max_retries: 3

  # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
  # The default is 50.
  #bulk_max_size: 50

  # The number of seconds to wait before trying to reconnect to Elasticsearch
  # after a network error. After waiting backoff.init seconds, the Beat
  # tries to reconnect. If the attempt fails, the backoff timer is increased
  # exponentially up to backoff.max. After a successful connection, the backoff
  # timer is reset. The default is 1s.
  #backoff.init: 1s

  # The maximum number of seconds to wait before attempting to connect to
  # Elasticsearch after a network error. The default is 60s.
  #backoff.max: 60s

  # Configure HTTP request timeout before failing a request to Elasticsearch.
  #timeout: 90

  # Use SSL settings for HTTPS.
  #ssl.enabled: true

  # Controls the verification of certificates. Valid values are:
  # * full, which verifies that the provided certificate is signed by a trusted
  # authority (CA) and also verifies that the server's hostname (or IP address)
  # matches the names identified within the certificate.
  # * strict, which verifies that the provided certificate is signed by a trusted
  # authority (CA) and also verifies that the server's hostname (or IP address)
  # matches the names identified within the certificate. If the Subject Alternative
  # Name is empty, it returns an error.
  # * certificate, which verifies that the provided certificate is signed by a
  # trusted authority (CA), but does not perform any hostname verification.
  # * none, which performs no verification of the server's certificate. This
  # mode disables many of the security benefits of SSL/TLS and should only be used
  # after very careful consideration. It is primarily intended as a temporary
  # diagnostic mechanism when attempting to resolve TLS errors; its use in
  # production environments is strongly discouraged.
  # The default value is full.
  #ssl.verification_mode: full

  # List of supported/valid TLS versions. By default all TLS versions from 1.1
  # up to 1.3 are enabled.
  #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]

  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"

  # Optional passphrase for decrypting the certificate key.
  #ssl.key_passphrase: ''

  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []

  # Configure curve types for ECDHE-based cipher suites
  #ssl.curve_types: []

  # Configure what types of renegotiation are supported. Valid options are
  # never, once, and freely. Default is never.
  #ssl.renegotiation: never

  # Configure a pin that can be used to do extra validation of the verified certificate chain,
  # this allows you to ensure that a specific certificate is used to validate the chain of trust.
  #
  # The pin is a base64 encoded string of the SHA-256 fingerprint.
  #ssl.ca_sha256: ""

  # A root CA HEX encoded fingerprint. During the SSL handshake if the
  # fingerprint matches the root CA certificate, it will be added to
  # the provided list of root CAs (`certificate_authorities`), if the
  # list is empty or not defined, the matching certificate will be the
  # only one in the list. Then the normal SSL validation happens.
  #ssl.ca_trusted_fingerprint: ""

  # Enable Kerberos support. Kerberos is automatically enabled if any Kerberos setting is set.
  #kerberos.enabled: true

  # Authentication type to use with Kerberos. Available options: keytab, password.
  #kerberos.auth_type: password

  # Path to the keytab file. It is used when auth_type is set to keytab.
  #kerberos.keytab: /etc/elastic.keytab

  # Path to the Kerberos configuration.
  #kerberos.config_path: /etc/krb5.conf

  # Name of the Kerberos user.
  #kerberos.username: elastic

  # Password of the Kerberos user. It is used when auth_type is set to password.
  #kerberos.password: changeme

  # Kerberos realm.
  #kerberos.realm: ELASTIC

  #metrics.period: 10s
  #state.period: 1m

# The `monitoring.cloud.id` setting overwrites the `monitoring.elasticsearch.hosts`
# setting. You can find the value for this setting in the Elastic Cloud web UI.
#monitoring.cloud.id:

# The `monitoring.cloud.auth` setting overwrites the `monitoring.elasticsearch.username`
# and `monitoring.elasticsearch.password` settings. The format is `<user>:<pass>`.
#monitoring.cloud.auth:

# =============================== HTTP Endpoint ================================

# Each beat can expose internal metrics through an HTTP endpoint. For security
# reasons the endpoint is disabled by default. This feature is currently experimental.
# Stats can be accessed through http://localhost:5066/stats. For pretty JSON output
# append ?pretty to the URL.

# Defines if the HTTP endpoint is enabled.
#http.enabled: false

# The HTTP endpoint will bind to this hostname, IP address, unix socket, or named pipe.
# When using IP addresses, it is recommended to only use localhost.
#http.host: localhost

# Port on which the HTTP endpoint will bind. Default is 5066.
#http.port: 5066

# Define which user should own the named pipe.
#http.named_pipe.user:

# Define which permissions should be applied to the named pipe, use the Security
# Descriptor Definition Language (SDDL) to define the permission. This option cannot be used with
# `http.user`.
#http.named_pipe.security_descriptor:

# Defines if the HTTP pprof endpoints are enabled.
# It is recommended that this is only enabled on localhost as these endpoints may leak data.
#http.pprof.enabled: false

# Controls the fraction of goroutine blocking events that are reported in the
# blocking profile.
#http.pprof.block_profile_rate: 0

# Controls the fraction of memory allocations that are recorded and reported in
# the memory profile.
#http.pprof.mem_profile_rate: 524288

# Controls the fraction of mutex contention events that are reported in the
# mutex profile.
#http.pprof.mutex_profile_rate: 0

# ============================== Process Security ==============================

# Enable or disable seccomp system call filtering on Linux. Default is enabled.
#seccomp.enabled: true

# ============================== Instrumentation ===============================

# Instrumentation support for the functionbeat.
#instrumentation:
    # Set to true to enable instrumentation of functionbeat.
    #enabled: false

    # Environment in which functionbeat is running (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:

    # Enable profiling of the server, recording profile samples as events.
    #
    # This feature is experimental.
    #profiling:
        #cpu:
            # Set to true to enable CPU profiling.
            #enabled: false
            #interval: 60s
            #duration: 10s
        #heap:
            # Set to true to enable heap profiling.
            #enabled: false
            #interval: 60s

# ================================= Migration ==================================

# This allows enabling 6.7 migration aliases
#migration.6_to_7.enabled: false

# =============================== Feature Flags ================================

# Enable and configure feature flags.
#features:
#  fqdn:
#    enabled: true