Monitor AWS Network Firewall logs

In this section, you’ll learn how to send AWS Network Firewall log events from AWS to your Elastic stack using Amazon Data Firehose.

You will go through the following steps:
- Select an AWS Network Firewall-compatible resource
- Create a delivery stream in Amazon Data Firehose
- Set up logging to forward the logs to the Elastic stack using a Firehose stream
- Visualize your logs in Kibana
Before you begin

We assume that you already have:
- An AWS account with permissions to pull the necessary data from AWS.
- A deployment using our hosted Elasticsearch Service on Elastic Cloud. The deployment includes an Elasticsearch cluster for storing and searching your data, and Kibana for visualizing and managing your data. Amazon Data Firehose works with the Elastic Stack version 7.17 or greater, running on Elastic Cloud only.
AWS PrivateLink is not supported. Make sure the deployment is on AWS, because the Amazon Data Firehose delivery stream connects specifically to an endpoint that needs to be on AWS.
Step 1: Install AWS integration in Kibana

- Find Integrations in the main menu or use the global search field.
- Browse the catalog to find the AWS integration.
- Navigate to the Settings tab and click Install AWS assets.
Step 2: Select a resource

You can either use an existing AWS Network Firewall, or create a new one for testing purposes.

Creating a Network Firewall is not trivial and is beyond the scope of this guide. For more information, check the Getting started with AWS Network Firewall guide in the AWS documentation.
Step 3: Create a stream in Amazon Data Firehose

- Go to the AWS console and navigate to Amazon Data Firehose.
- Click Create Firehose stream and choose the source and destination of your Firehose stream. Set source to `Direct PUT` and destination to `Elastic`.
- Collect the Elasticsearch endpoint and API key from your deployment on Elastic Cloud.
  - To find the Elasticsearch endpoint URL:
    - Go to the Elastic Cloud console.
    - Find your deployment in the Hosted deployments card and select Manage.
    - Under Applications, click Copy endpoint next to Elasticsearch.
    - Make sure that your Elasticsearch endpoint URL includes `.es.` between the deployment name and the region. Example: `https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com`
  - To create the API key:
    - Go to the Elastic Cloud console.
    - Select Open Kibana.
    - Expand the left-hand menu and, under Management, select Stack Management > API Keys, then click Create API key. If you are using an API key with Restrict privileges, make sure to review the Indices privileges to provide at least `auto_configure` and `write` permissions for the indices you will be using with this delivery stream.
- Set up the delivery stream by specifying the following data:
  - Elastic endpoint URL: the URL that you copied in the previous step.
  - API key: the API key that you created in the previous step.
  - Content encoding: to reduce data transfer costs, use GZIP encoding.
  - Retry duration: a duration between 60 and 300 seconds should be suitable for most use cases.
  - Backup settings: it is recommended to configure S3 backup for failed records. These backups can then be used to restore data whose ingestion failed because of unforeseen service outages.

The Firehose stream is ready to send logs to your Elastic Cloud deployment.
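If you prefer to script this step rather than use the console, the Elastic destination is, under the hood, a Firehose HTTP endpoint destination, and the settings above map onto a boto3 `create_delivery_stream` request. The sketch below only builds the request payload; the stream name, role and bucket ARNs, and API key are placeholders you would replace with your own values.

```python
def build_firehose_request(stream_name, es_endpoint, api_key,
                           backup_bucket_arn, role_arn):
    """Build kwargs for boto3's firehose.create_delivery_stream().

    All argument values are placeholders; the payload shape follows
    Firehose's HTTP endpoint destination, which the console's
    "Elastic" destination uses.
    """
    # The endpoint must contain ".es." between the deployment name and
    # the region, as described in Step 3.
    if ".es." not in es_endpoint:
        raise ValueError("endpoint does not look like an Elasticsearch URL")
    return {
        "DeliveryStreamName": stream_name,
        "DeliveryStreamType": "DirectPut",  # source: Direct PUT
        "HttpEndpointDestinationConfiguration": {
            "EndpointConfiguration": {
                "Url": es_endpoint,
                "AccessKey": api_key,  # the Elastic Cloud API key
                "Name": "Elastic",
            },
            # GZIP content encoding reduces data transfer costs.
            "RequestConfiguration": {"ContentEncoding": "GZIP"},
            # 60-300 seconds suits most use cases.
            "RetryOptions": {"DurationInSeconds": 300},
            # Back up failed records to S3 so they can be replayed later.
            "S3BackupMode": "FailedDataOnly",
            "S3Configuration": {
                "RoleARN": role_arn,
                "BucketARN": backup_bucket_arn,
            },
        },
    }
```

You could then pass the result to `boto3.client("firehose").create_delivery_stream(**request)`.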
Step 4: Enable logging

AWS Network Firewall has built-in logging support and can send logs to Amazon S3, Amazon CloudWatch, and Amazon Kinesis Data Firehose.
To enable logging to Amazon Data Firehose:
- In the AWS console, navigate to the AWS Network Firewall service.
- Select the firewall for which you want to enable logging.
- In the Logging section, click Edit.
- Select the Send logs to option and choose Kinesis Data Firehose.
- Select the Firehose stream you created in the previous step.
- Click Save.
At this point, the Network Firewall will start sending logs to the Firehose stream.
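The console steps above can also be scripted with the Network Firewall `UpdateLoggingConfiguration` API. As a minimal sketch, the helper below builds the `LoggingConfiguration` payload that routes flow and alert logs to a Firehose stream; the stream name is a placeholder.

```python
def build_logging_configuration(firehose_stream, log_types=("FLOW", "ALERT")):
    """Build the LoggingConfiguration payload for
    network-firewall update-logging-configuration.

    firehose_stream is a placeholder for the name of the delivery
    stream created in Step 3.
    """
    return {
        "LogDestinationConfigs": [
            {
                "LogType": log_type,  # "FLOW" or "ALERT"
                "LogDestinationType": "KinesisDataFirehose",
                "LogDestination": {"deliveryStream": firehose_stream},
            }
            for log_type in log_types
        ]
    }
```

The result could be passed to `boto3.client("network-firewall").update_logging_configuration(FirewallArn=..., LoggingConfiguration=...)`.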
Step 5: Visualize your Network Firewall logs in Kibana
With the new logging settings in place, the Network Firewall starts sending log events to the Firehose stream.
Navigate to Kibana and choose Visualize your logs with Discover.
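Before opening Discover, you may want to confirm that documents are actually arriving. One way is a quick `_count` query against your deployment, authenticated with an API key. The sketch below uses only the standard library; the endpoint and API key are placeholders, and `logs-*` is a deliberately broad index pattern you may want to narrow.

```python
import urllib.request


def build_count_request(es_endpoint, api_key, index_pattern="logs-*"):
    """Build an Elasticsearch _count request authenticated with an API key."""
    req = urllib.request.Request(f"{es_endpoint}/{index_pattern}/_count")
    req.add_header("Authorization", f"ApiKey {api_key}")
    return req


# Example (requires a reachable deployment; values are placeholders):
# with urllib.request.urlopen(
#     build_count_request("https://<deployment>.es.<region>.<csp>.elastic-cloud.com",
#                         "<api_key>")
# ) as resp:
#     print(resp.read())  # {"count": ..., ...}
```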
