Configure functions
Functionbeat runs as a function in your serverless environment.
Before deploying Functionbeat, you need to configure one or more functions and specify details about the services that will trigger the functions.
You configure the functions in the functionbeat.yml configuration file.
When you’re done, you can deploy the functions
to your serverless environment.
The following example configures two functions: cloudwatch and sqs. The cloudwatch function collects events from CloudWatch Logs. The sqs function collects messages from Amazon Simple Queue Service (SQS). Both functions forward the events to Elasticsearch.
functionbeat.provider.aws.endpoint: "s3.amazonaws.com"
functionbeat.provider.aws.deploy_bucket: "functionbeat-deploy"
functionbeat.provider.aws.functions:
  - name: cloudwatch
    enabled: true
    type: cloudwatch_logs
    description: "lambda function for cloudwatch logs"
    triggers:
      - log_group_name: /aws/lambda/my-lambda-function
        #filter_pattern: mylog_
  - name: sqs
    enabled: true
    type: sqs
    description: "lambda function for SQS events"
    triggers:
      - event_source_arn: arn:aws:sqs:us-east-1:123456789012:myevents

cloud.id: "MyESDeployment:SomeLongString=="
cloud.auth: "elastic:SomeLongString"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
Configuration options
You can specify the following options to configure the functions that you want to deploy.
If you change the configuration after deploying the function, use the update command to update your deployment.
provider.aws.endpoint
AWS endpoint to use in the URL template to load functions.
provider.aws.deploy_bucket
A unique name for the S3 bucket that the Lambda artifact will be uploaded to.
name
A unique name for the Lambda function. This is the name of the function as it will appear in the Lambda console on AWS.
type
The type of service to monitor. For this release, the supported types are:
- cloudwatch_logs: Collects events from CloudWatch Logs.
- sqs: Collects data from Amazon Simple Queue Service (SQS).
- kinesis: Collects data from a Kinesis stream.
description
A description of the function. This description is useful when you are running multiple functions and need more context about how each function is used.
triggers
A list of triggers that will cause the function to execute. The list of valid triggers depends on the type:
- For cloudwatch_logs, specify a list of log groups. Because the AWS limit is one subscription filter per CloudWatch log group, the log groups specified here must have no other subscription filters, or deployment will fail. For more information, see Deployment to AWS fails with "resource limit exceeded".
- For sqs or kinesis, specify a list of Amazon Resource Names (ARNs), as shown in the example below.
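For instance, a kinesis function could be configured with a trigger along the lines of the following sketch (the function name and stream ARN are placeholders):

functionbeat.provider.aws.functions:
  - name: kinesis
    enabled: true
    type: kinesis
    description: "lambda function for Kinesis events"
    triggers:
      - event_source_arn: arn:aws:kinesis:us-east-1:123456789012:stream/mystream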
filter_pattern
A regular expression that matches the events you want to collect. Setting this option may reduce execution costs because the function only executes if there is data that matches the pattern.
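For example, extending the cloudwatch function shown earlier, the pattern is set on the trigger (the pattern value here is only illustrative):

triggers:
  - log_group_name: /aws/lambda/my-lambda-function
    filter_pattern: mylog_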
concurrency
The reserved number of instances for the function. Setting this option may reduce execution costs by limiting the number of functions that can execute in your serverless environment. The default is unreserved.
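For example, to reserve five instances for a function (an illustrative value), set the option alongside the other function settings:

concurrency: 5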
memory_size
The maximum amount of memory to allocate for this function. Specify a value that is a multiple of 64. There is a hard limit of 3008 MiB for each function. The default is 128 MiB.
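For example, to allocate more memory than the default (the value is illustrative and assumes the MiB suffix format used for the default):

memory_size: 256MiB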
role
The custom execution role to use for the deployed function. For example:
role: arn:aws:iam::123456789012:role/MyFunction
Make sure the custom role has the permissions required to run the function. For more information, see IAM permissions required for deployment.
If role is not specified, the function uses the default role and policy created during deployment.
virtual_private_cloud
Specifies additional settings required to connect to private resources in an Amazon Virtual Private Cloud (VPC). For example:
virtual_private_cloud:
  security_group_ids:
    - mySecurityGroup
    - anotherSecurityGroup
  subnet_ids:
    - myUniqueID
dead_letter_config.target_arn
The dead letter queue to use for messages that can’t be processed successfully. Set this option to an ARN that points to an SQS queue.
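For example (the queue ARN is a placeholder):

dead_letter_config.target_arn: arn:aws:sqs:us-east-1:123456789012:my-dead-letter-queue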
batch_size
The number of events to read from a Kinesis stream. The minimum value is 100 and the maximum is 10000. The default is 100.
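For example, to read larger batches from the stream (an illustrative value within the allowed range):

batch_size: 1000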
starting_position
The starting position to read from a Kinesis stream. Valid values are trim_horizon and latest. The default is trim_horizon.
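For example, to read only records added after the function is deployed (an illustrative choice):

starting_position: latest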
keep_null
If this option is set to true, fields with null values will be published in the output document. By default, keep_null is set to false.
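For example, to publish fields with null values:

keep_null: true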