Data Exfiltration Detection

Version: 2.3.0
Compatible Kibana version(s): 8.10.1 or higher
Supported Serverless project types: Security
Subscription level: Platinum
Level of support: Elastic
The Data Exfiltration Detection (DED) package contains assets for detecting data exfiltration in network and file data. The package currently supports only unidirectional flows; it does not yet accommodate bidirectional flows. It requires a Platinum subscription, so ensure that a Trial or Platinum level subscription is active on your cluster before proceeding. This package is licensed under Elastic License 2.0.
For more detailed information, refer to the accompanying blog post.
Installation
- Upgrading: If upgrading from a version below v2.0.0, see the section v2.0.0 and beyond.
- Add the Integration Package: Install the package via Management > Integrations > Add Data Exfiltration Detection. Configure the integration name and agent policy. Click Save and Continue. (Note that this integration does not rely on an agent, and can be assigned to a policy without an agent.)
- Install assets: Install the assets by clicking Settings > Install Data Exfiltration Detection assets.
- Check the health of the transform: The transform is scheduled to run every 30 minutes and creates the index `ml_network_ded-<VERSION>`. To check the health of the transform, go to Management > Stack Management > Data > Transforms and look under `logs-ded.pivot_transform-default-<FLEET-TRANSFORM-VERSION>`. Follow the instructions under the header Customize Data Exfiltration Detection Transform below to adjust filters based on your environment's needs.
- Create data views for anomaly detection jobs: The anomaly detection jobs under this package rely on two indices. One has file events (`logs-endpoint.events.file-*`), and the other (`ml_network_ded.all`) collects network logs from a transform. Before enabling the anomaly detection jobs, create a data view with both index patterns (see the API sketch at the end of this section):
  - Go to Stack Management > Kibana > Data Views and click Create data view.
  - Enter both index patterns in the Index pattern box, i.e., `logs-endpoint.events.file-*, ml_network_ded.all`, and copy the same into the Name field.
  - Select `@timestamp` under the Timestamp field and click Save data view to Kibana.
  - Use the new data view (`logs-endpoint.events.file-*, ml_network_ded.all`) to create anomaly detection jobs for this package.
- Add preconfigured anomaly detection jobs: In Machine Learning > Anomaly Detection, when you create a job, you should see an option to Use preconfigured jobs with a card for Data Exfiltration Detection. When you select the card, you will see a pre-configured anomaly detection job that you can enable depending on what makes the most sense for your environment. Note: In the Machine Learning app, these configurations are available only when data exists that matches the query specified in the ded-ml file. For example, this would be data in `logs-endpoint.events.*` if you used Elastic Defend to collect events.
if you used Elastic Defend to collect events. -
Data view configuration for Dashboards: For the dashboard to work as expected, the following settings need to be configured in Kibana.
- You have started the above anomaly detection jobs.
-
You have read access to
.ml-anomalies-shared
index or are assigned themachine_learning_user
role. For more information on roles, please refer to Built-in roles in Elastic. Please be aware that a user who has access to the underlying machine learning results indices can see the results of all jobs in all spaces. Be mindful of granting permissions if you use Kibana spaces to control which users can see which machine learning results. For more information on machine learning privileges, refer to setup-privileges. -
After enabling the jobs, go to Management > Stack Management > Kibana > Data Views. Click on Create data view with the following settings:
-
Name:
.ml-anomalies-shared
-
Index pattern :
.ml-anomalies-shared
- Select Show Advanced settings enable Allow hidden and system indices
-
Custom data view ID:
.ml-anomalies-shared
-
Name:
-
- Enable detection rules: You can also enable detection rules to alert on data exfiltration activity in your environment, based on anomalies flagged by the above ML jobs. As of version 2.0.0 of this package, these rules are available as part of the Detection Engine and can be found using the tag Use Case: Data Exfiltration Detection. See this documentation for more information on importing and enabling the rules.

In Security > Rules, filtering with the “Use Case: Data Exfiltration Detection” tag
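If you prefer to script the two data views above rather than creating them in the UI, here is a minimal sketch using the Kibana data views API. The `$KIBANA_URL` and `$KIBANA_AUTH` (user:password) variables and the curl wrapper are illustrative assumptions, not part of the package:

```sh
# Data view combining file events and the transform's network logs,
# used by the anomaly detection jobs.
curl -u "$KIBANA_AUTH" -X POST "$KIBANA_URL/api/data_views/data_view" \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '
{
  "data_view": {
    "title": "logs-endpoint.events.file-*,ml_network_ded.all",
    "name": "logs-endpoint.events.file-*, ml_network_ded.all",
    "timeFieldName": "@timestamp"
  }
}'

# Data view for the dashboard, with the custom data view ID it expects.
# Depending on your Kibana version, the "Allow hidden and system indices"
# toggle may still need to be enabled in the UI.
curl -u "$KIBANA_AUTH" -X POST "$KIBANA_URL/api/data_views/data_view" \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '
{
  "data_view": {
    "id": ".ml-anomalies-shared",
    "title": ".ml-anomalies-shared",
    "name": ".ml-anomalies-shared"
  }
}'
```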
Transform
To inspect the installed assets, you can navigate to Stack Management > Data > Transforms.

Transform name | Purpose | Source index | Destination index | Alias
---|---|---|---|---
`ded.pivot_transform` | Collects network logs from your environment | `logs-*` | `ml_network_ded-[version]` | `ml_network_ded.all`

When querying the destination index (`ml_network_ded-<VERSION>`) for network logs, we advise using its alias (`ml_network_ded.all`). If the underlying package is upgraded, the alias helps you keep access to previous findings.
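You can also verify the transform and query the alias from the command line. A minimal sketch against the Elasticsearch API, assuming `$ES_URL` and `$ES_AUTH` (user:password) point at your cluster:

```sh
# Check the transform's health and progress; the wildcard covers the
# Fleet transform version suffix.
curl -u "$ES_AUTH" "$ES_URL/_transform/logs-ded.pivot_transform-default-*/_stats?pretty"

# Query recent network logs through the alias rather than the versioned index.
curl -u "$ES_AUTH" "$ES_URL/ml_network_ded.all/_search?size=5&sort=@timestamp:desc&pretty"
```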
Customize Data Exfiltration Detection Transform
To customize filters in the Data Exfiltration Detection transform, follow the steps below. You can use these instructions to add or remove filters for fields such as `process.name`, `source.ip`, `destination.ip`, and others. (A sketch for testing a candidate filter from the command line follows these steps.)

- Go to Stack Management > Data > Transforms > `logs-ded.pivot_transform-default-<FLEET-TRANSFORM-VERSION>`.
- Click the Actions bar at the far right of the transform and select the Clone option. ![Data Exfiltration Detection Rules](images/ded/ded_transform_1.png)
- In the new Clone transform window, go to the Search filter and update any field values you want to add or remove. Click the Apply changes button on the right side to save these changes. Note: The image below shows an example of filtering a new `process.name`, `explorer.exe`. You can follow a similar example and update the field value list based on your environment to help reduce noise and potential false positives. ![Data Exfiltration Detection Rules](images/ded/ded_transform_2.png)
- Scroll down and select the Next button at the bottom right. Under the Transform details section, enter a new Transform ID and Destination index of your choice, then click the Next button. ![Data Exfiltration Detection Rules](images/ded/ded_transform_3.png)
- Lastly, select the Create and Start option. Your updated transform will now start collecting data. Note: Do not forget to update your data view based on the new Destination index you have just created. ![Data Exfiltration Detection Rules](images/ded/ded_transform_4.png)
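Before cloning, you can sanity-check how many source documents a candidate exclusion would keep. This is an illustrative sketch only; the `explorer.exe` value mirrors the example above, and `$ES_URL`/`$ES_AUTH` are assumed as before:

```sh
# Count source documents that remain after excluding a noisy process.
curl -u "$ES_AUTH" "$ES_URL/logs-*/_count?pretty" \
  -H 'Content-Type: application/json' -d '
{
  "query": {
    "bool": {
      "must_not": [
        { "term": { "process.name": "explorer.exe" } }
      ]
    }
  }
}'
```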
Dashboard
After the data view for the dashboard is configured, the Data Exfiltration Detection Dashboard is available under Analytics > Dashboard. This dashboard gives an overview of anomalies triggered for the data exfiltration detection package.
Anomaly Detection Jobs
Job | Description
---|---
`ded_high_sent_bytes_destination_geo_country_iso_code` | Detects data exfiltration to an unusual geo-location (by country ISO code).
`ded_high_sent_bytes_destination_ip` | Detects data exfiltration to an unusual geo-location (by IP address).
`ded_high_sent_bytes_destination_port` | Detects data exfiltration to an unusual destination port.
`ded_high_sent_bytes_destination_region_name` | Detects data exfiltration to an unusual geo-location (by region name).
`ded_high_bytes_written_to_external_device` | Detects data exfiltration activity by identifying high bytes written to an external device.
`ded_rare_process_writing_to_external_device` | Detects data exfiltration activity by identifying a write event started by a rare process to an external device.
`ded_high_bytes_written_to_external_device_airdrop` | Detects data exfiltration activity by identifying high bytes written to an external device via AirDrop.
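If you enable one of these jobs outside the UI, a minimal sketch using the machine learning APIs is shown below. The `datafeed-` prefix follows the common naming convention for package-installed datafeeds and is an assumption to verify against your installation; `$ES_URL`/`$ES_AUTH` are assumed as before:

```sh
JOB="ded_high_sent_bytes_destination_ip"

# Open the anomaly detection job, then start its datafeed.
curl -u "$ES_AUTH" -X POST "$ES_URL/_ml/anomaly_detectors/$JOB/_open"
curl -u "$ES_AUTH" -X POST "$ES_URL/_ml/datafeeds/datafeed-$JOB/_start"
```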
v2.0.0 and beyond
v2.0.0 of the package introduces breaking changes, namely deprecating detection rules from the package. To continue receiving updates to Data Exfiltration Detection, we recommend upgrading to v2.0.0 after doing the following:
- Delete existing ML jobs: Navigate to Machine Learning > Anomaly Detection and delete the jobs corresponding to the following IDs (an API sketch for this follows these steps):
  - high-sent-bytes-destination-geo-country_iso_code
  - high-sent-bytes-destination-ip
  - high-sent-bytes-destination-port
  - high-sent-bytes-destination-region_name
  - high-bytes-written-to-external-device
  - rare-process-writing-to-external-device
  - high-bytes-written-to-external-device-airdrop

  Depending on the version of the package you're using, you might also be able to search for the above jobs using the group `data_exfiltration`.
- Uninstall existing rules associated with this package: Navigate to Security > Rules and delete the following rules:
  - Potential Data Exfiltration Activity to an Unusual ISO Code
  - Potential Data Exfiltration Activity to an Unusual Region
  - Potential Data Exfiltration Activity to an Unusual IP Address
  - Potential Data Exfiltration Activity to an Unusual Destination Port
  - Spike in Bytes Sent to an External Device
  - Spike in Bytes Sent to an External Device via Airdrop
  - Unusual Process Writing Data to an External Device

  Depending on the version of the package you're using, you might also be able to search for the above rules using the tag Data Exfiltration.
- Upgrade the Data Exfiltration Detection package to v2.0.0 using the steps here.
- Install the new rules as described in the Enable detection rules section above.
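If you prefer to remove the legacy jobs via the API, here is a minimal sketch. Jobs must be closed before they can be deleted; `$ES_URL`/`$ES_AUTH` are assumed as before:

```sh
for JOB in \
  high-sent-bytes-destination-geo-country_iso_code \
  high-sent-bytes-destination-ip \
  high-sent-bytes-destination-port \
  high-sent-bytes-destination-region_name \
  high-bytes-written-to-external-device \
  rare-process-writing-to-external-device \
  high-bytes-written-to-external-device-airdrop
do
  # Close the job first (force, in case it is still running), then delete it.
  curl -u "$ES_AUTH" -X POST "$ES_URL/_ml/anomaly_detectors/$JOB/_close?force=true"
  curl -u "$ES_AUTH" -X DELETE "$ES_URL/_ml/anomaly_detectors/$JOB"
done
```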
In version 2.1.1, the package ignores data in cold and frozen data tiers to reduce heap memory usage, avoid running on outdated data, and follow best practices.
Licensing
Usage in production requires a license that permits use of machine learning features.
Changelog
Version | Details | Kibana version(s)
---|---|---
2.3.0 | Enhancement (View pull request) | 8.10.1 or higher
2.2.1 | Enhancement (View pull request) | 8.10.1 or higher
2.2.0 | Enhancement (View pull request) | 8.10.1 or higher
2.1.2 | Enhancement (View pull request) | 8.9.0 or higher
2.1.1 | Enhancement (View pull request) | 8.9.0 or higher
2.1.0 | Enhancement (View pull request) | 8.9.0 or higher
2.0.0 | Enhancement (View pull request) | 8.9.0 or higher
1.0.3 | Enhancement (View pull request) | 8.5.0 or higher
1.0.2 | Enhancement (View pull request) | 8.5.0 or higher
1.0.1 | Enhancement (View pull request) | 8.5.0 or higher
1.0.0 | Enhancement (View pull request) | 8.5.0 or higher
0.0.2 | Enhancement (View pull request) | —
0.0.1 | Enhancement (View pull request) | —