EQL search
Event Query Language (EQL) is a query language for event-based time series data, such as logs, metrics, and traces.
Advantages of EQL
- EQL lets you express relationships between events. Many query languages allow you to match single events. EQL lets you match a sequence of events across different event categories and time spans.
- EQL has a low learning curve. EQL syntax looks like other common query languages, such as SQL. EQL lets you write and read queries intuitively, which makes for quick, iterative searching.
- EQL is designed for security use cases. While you can use it for any event-based data, we created EQL for threat hunting. EQL not only supports indicator of compromise (IOC) searches but can describe activity that goes beyond IOCs.
Required fields
With the exception of sample queries, EQL searches require that the searched data stream or index contains a timestamp field. By default, EQL uses the @timestamp field from the Elastic Common Schema (ECS).
EQL searches also require an event category field, unless you use the any keyword to search for documents without an event category field. By default, EQL uses the ECS event.category field.
To use a different timestamp or event category field, see Specify a timestamp or event category field.
While no schema is required to use EQL, we recommend using the ECS. EQL searches are designed to work with core ECS fields by default.
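If your data uses different field names, you can point EQL at them with the EQL search API's timestamp_field and event_category_field parameters. The following is a minimal sketch using the Python client shown in this page's examples; the field names file.accessed and file.type are hypothetical placeholders, so substitute fields that actually exist in your data.
# Minimal sketch: run an EQL query against custom timestamp and event category fields.
# "file.accessed" and "file.type" are hypothetical field names used only for illustration.
resp = client.eql.search(
    index="my-data-stream",
    timestamp_field="file.accessed",
    event_category_field="file.type",
    query="""
        file where file.size > 1
    """,
)
print(resp)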
Run an EQL search
Use the EQL search API to run a basic EQL query.
resp = client.eql.search( index="my-data-stream", query="\n process where process.name == \"regsvr32.exe\"\n ", ) print(resp)
response = client.eql.search( index: 'my-data-stream', body: { query: "\n process where process.name == \"regsvr32.exe\"\n " } ) puts response
const response = await client.eql.search({ index: "my-data-stream", query: '\n process where process.name == "regsvr32.exe"\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "query": """ process where process.name == "regsvr32.exe" """ }
By default, basic EQL queries return the 10 most recent matching events in the hits.events property. These hits are sorted by timestamp, converted to milliseconds since the Unix epoch, in ascending order.
{ "is_partial": false, "is_running": false, "took": 60, "timed_out": false, "hits": { "total": { "value": 2, "relation": "eq" }, "events": [ { "_index": ".ds-my-data-stream-2099.12.07-000001", "_id": "OQmfCaduce8zoHT93o4H", "_source": { "@timestamp": "2099-12-07T11:07:09.000Z", "event": { "category": "process", "id": "aR3NWVOs", "sequence": 4 }, "process": { "pid": 2012, "name": "regsvr32.exe", "command_line": "regsvr32.exe /s /u /i:https://...RegSvr32.sct scrobj.dll", "executable": "C:\\Windows\\System32\\regsvr32.exe" } } }, { "_index": ".ds-my-data-stream-2099.12.07-000001", "_id": "xLkCaj4EujzdNSxfYLbO", "_source": { "@timestamp": "2099-12-07T11:07:10.000Z", "event": { "category": "process", "id": "GTSmSqgz0U", "sequence": 6, "type": "termination" }, "process": { "pid": 2012, "name": "regsvr32.exe", "executable": "C:\\Windows\\System32\\regsvr32.exe" } } } ] } }
Use the size parameter to get a smaller or larger set of hits:
resp = client.eql.search( index="my-data-stream", query="\n process where process.name == \"regsvr32.exe\"\n ", size=50, ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", query: '\n process where process.name == "regsvr32.exe"\n ', size: 50, }); console.log(response);
GET /my-data-stream/_eql/search { "query": """ process where process.name == "regsvr32.exe" """, "size": 50 }
Search for a sequence of events
Use EQL’s sequence syntax to search for a series of ordered events. List the event items in ascending chronological order, with the most recent event listed last:
resp = client.eql.search( index="my-data-stream", query="\n sequence\n [ process where process.name == \"regsvr32.exe\" ]\n [ file where stringContains(file.name, \"scrobj.dll\") ]\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", query: '\n sequence\n [ process where process.name == "regsvr32.exe" ]\n [ file where stringContains(file.name, "scrobj.dll") ]\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "query": """ sequence [ process where process.name == "regsvr32.exe" ] [ file where stringContains(file.name, "scrobj.dll") ] """ }
The response’s hits.sequences property contains the 10 most recent matching sequences.
{ ... "hits": { "total": ..., "sequences": [ { "events": [ { "_index": ".ds-my-data-stream-2099.12.07-000001", "_id": "OQmfCaduce8zoHT93o4H", "_source": { "@timestamp": "2099-12-07T11:07:09.000Z", "event": { "category": "process", "id": "aR3NWVOs", "sequence": 4 }, "process": { "pid": 2012, "name": "regsvr32.exe", "command_line": "regsvr32.exe /s /u /i:https://...RegSvr32.sct scrobj.dll", "executable": "C:\\Windows\\System32\\regsvr32.exe" } } }, { "_index": ".ds-my-data-stream-2099.12.07-000001", "_id": "yDwnGIJouOYGBzP0ZE9n", "_source": { "@timestamp": "2099-12-07T11:07:10.000Z", "event": { "category": "file", "id": "tZ1NWVOs", "sequence": 5 }, "process": { "pid": 2012, "name": "regsvr32.exe", "executable": "C:\\Windows\\System32\\regsvr32.exe" }, "file": { "path": "C:\\Windows\\System32\\scrobj.dll", "name": "scrobj.dll" } } } ] } ] } }
Use with maxspan to constrain matching sequences to a timespan:
resp = client.eql.search( index="my-data-stream", query="\n sequence with maxspan=1h\n [ process where process.name == \"regsvr32.exe\" ]\n [ file where stringContains(file.name, \"scrobj.dll\") ]\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", query: '\n sequence with maxspan=1h\n [ process where process.name == "regsvr32.exe" ]\n [ file where stringContains(file.name, "scrobj.dll") ]\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "query": """ sequence with maxspan=1h [ process where process.name == "regsvr32.exe" ] [ file where stringContains(file.name, "scrobj.dll") ] """ }
Use ! to match missing events: events in a sequence that do not meet a condition within a given timespan:
resp = client.eql.search( index="my-data-stream", query="\n sequence with maxspan=1d\n [ process where process.name == \"cmd.exe\" ]\n ![ process where stringContains(process.command_line, \"ocx\") ]\n [ file where stringContains(file.name, \"scrobj.dll\") ]\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", query: '\n sequence with maxspan=1d\n [ process where process.name == "cmd.exe" ]\n ![ process where stringContains(process.command_line, "ocx") ]\n [ file where stringContains(file.name, "scrobj.dll") ]\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "query": """ sequence with maxspan=1d [ process where process.name == "cmd.exe" ] ![ process where stringContains(process.command_line, "ocx") ] [ file where stringContains(file.name, "scrobj.dll") ] """ }
Missing events are indicated in the response as "missing": true:
{ ... "hits": { "total": ..., "sequences": [ { "events": [ { "_index": ".ds-my-data-stream-2023.07.04-000001", "_id": "AnpTIYkBrVQ2QEgsWg94", "_source": { "@timestamp": "2099-12-07T11:06:07.000Z", "event": { "category": "process", "id": "cMyt5SZ2", "sequence": 3 }, "process": { "pid": 2012, "name": "cmd.exe", "executable": "C:\\Windows\\System32\\cmd.exe" } } }, { "_index": "", "_id": "", "_source": {}, "missing": true }, { "_index": ".ds-my-data-stream-2023.07.04-000001", "_id": "BHpTIYkBrVQ2QEgsWg94", "_source": { "@timestamp": "2099-12-07T11:07:10.000Z", "event": { "category": "file", "id": "tZ1NWVOs", "sequence": 5 }, "process": { "pid": 2012, "name": "regsvr32.exe", "executable": "C:\\Windows\\System32\\regsvr32.exe" }, "file": { "path": "C:\\Windows\\System32\\scrobj.dll", "name": "scrobj.dll" } } } ] } ] } }
Use the by keyword to match events that share the same field values:
resp = client.eql.search( index="my-data-stream", query="\n sequence with maxspan=1h\n [ process where process.name == \"regsvr32.exe\" ] by process.pid\n [ file where stringContains(file.name, \"scrobj.dll\") ] by process.pid\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", query: '\n sequence with maxspan=1h\n [ process where process.name == "regsvr32.exe" ] by process.pid\n [ file where stringContains(file.name, "scrobj.dll") ] by process.pid\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "query": """ sequence with maxspan=1h [ process where process.name == "regsvr32.exe" ] by process.pid [ file where stringContains(file.name, "scrobj.dll") ] by process.pid """ }
If a field value should be shared across all events, use the sequence by keyword. The following query is equivalent to the previous one.
resp = client.eql.search( index="my-data-stream", query="\n sequence by process.pid with maxspan=1h\n [ process where process.name == \"regsvr32.exe\" ]\n [ file where stringContains(file.name, \"scrobj.dll\") ]\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", query: '\n sequence by process.pid with maxspan=1h\n [ process where process.name == "regsvr32.exe" ]\n [ file where stringContains(file.name, "scrobj.dll") ]\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "query": """ sequence by process.pid with maxspan=1h [ process where process.name == "regsvr32.exe" ] [ file where stringContains(file.name, "scrobj.dll") ] """ }
The hits.sequences.join_keys property contains the shared field values.
{ ... "hits": ..., "sequences": [ { "join_keys": [ 2012 ], "events": ... } ] } }
Use the until keyword to specify an expiration event for sequences. Matching sequences must end before this event.
resp = client.eql.search( index="my-data-stream", query="\n sequence by process.pid with maxspan=1h\n [ process where process.name == \"regsvr32.exe\" ]\n [ file where stringContains(file.name, \"scrobj.dll\") ]\n until [ process where event.type == \"termination\" ]\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", query: '\n sequence by process.pid with maxspan=1h\n [ process where process.name == "regsvr32.exe" ]\n [ file where stringContains(file.name, "scrobj.dll") ]\n until [ process where event.type == "termination" ]\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "query": """ sequence by process.pid with maxspan=1h [ process where process.name == "regsvr32.exe" ] [ file where stringContains(file.name, "scrobj.dll") ] until [ process where event.type == "termination" ] """ }
Sample chronologically unordered events
Use EQL’s sample syntax to search for events that match one or more join keys and a set of filters. Samples are similar to sequences, but do not return events in chronological order. In fact, sample queries can run on data without a timestamp. Sample queries can be useful to find correlations in events that don’t always occur in the same sequence, or that occur across long time spans.
The examples below use the following sample data:
resp = client.indices.create( index="my-index-000001", mappings={ "properties": { "ip": { "type": "ip" }, "version": { "type": "version" }, "missing_keyword": { "type": "keyword" }, "@timestamp": { "type": "date" }, "type_test": { "type": "keyword" }, "@timestamp_pretty": { "type": "date", "format": "dd-MM-yyyy" }, "event_type": { "type": "keyword" }, "event": { "properties": { "category": { "type": "alias", "path": "event_type" } } }, "host": { "type": "keyword" }, "os": { "type": "keyword" }, "bool": { "type": "boolean" }, "uptime": { "type": "long" }, "port": { "type": "long" } } }, ) print(resp) resp1 = client.indices.create( index="my-index-000002", mappings={ "properties": { "ip": { "type": "ip" }, "@timestamp": { "type": "date" }, "@timestamp_pretty": { "type": "date", "format": "yyyy-MM-dd" }, "type_test": { "type": "keyword" }, "event_type": { "type": "keyword" }, "event": { "properties": { "category": { "type": "alias", "path": "event_type" } } }, "host": { "type": "keyword" }, "op_sys": { "type": "keyword" }, "bool": { "type": "boolean" }, "uptime": { "type": "long" }, "port": { "type": "long" } } }, ) print(resp1) resp2 = client.indices.create( index="my-index-000003", mappings={ "properties": { "host_ip": { "type": "ip" }, "@timestamp": { "type": "date" }, "date": { "type": "date" }, "event_type": { "type": "keyword" }, "event": { "properties": { "category": { "type": "alias", "path": "event_type" } } }, "missing_keyword": { "type": "keyword" }, "host": { "type": "keyword" }, "os": { "type": "keyword" }, "bool": { "type": "boolean" }, "uptime": { "type": "long" }, "port": { "type": "long" } } }, ) print(resp2) resp3 = client.bulk( index="my-index-000001", refresh=True, operations=[ { "index": { "_id": 1 } }, { "@timestamp": "1234567891", "@timestamp_pretty": "12-12-2022", "missing_keyword": "test", "type_test": "abc", "ip": "10.0.0.1", "event_type": "alert", "host": "doom", "uptime": 0, "port": 1234, "os": "win10", "version": "1.0.0", "id": 11 }, { "index": { "_id": 2 } }, { "@timestamp": "1234567892", "@timestamp_pretty": "13-12-2022", "event_type": "alert", "type_test": "abc", "host": "CS", "uptime": 5, "port": 1, "os": "win10", "version": "1.2.0", "id": 12 }, { "index": { "_id": 3 } }, { "@timestamp": "1234567893", "@timestamp_pretty": "12-12-2022", "event_type": "alert", "type_test": "abc", "host": "farcry", "uptime": 1, "port": 1234, "bool": False, "os": "win10", "version": "2.0.0", "id": 13 }, { "index": { "_id": 4 } }, { "@timestamp": "1234567894", "@timestamp_pretty": "13-12-2022", "event_type": "alert", "type_test": "abc", "host": "GTA", "uptime": 3, "port": 12, "os": "slack", "version": "10.0.0", "id": 14 }, { "index": { "_id": 5 } }, { "@timestamp": "1234567895", "@timestamp_pretty": "17-12-2022", "event_type": "alert", "host": "sniper 3d", "uptime": 6, "port": 1234, "os": "fedora", "version": "20.1.0", "id": 15 }, { "index": { "_id": 6 } }, { "@timestamp": "1234568896", "@timestamp_pretty": "17-12-2022", "event_type": "alert", "host": "doom", "port": 65123, "bool": True, "os": "redhat", "version": "20.10.0", "id": 16 }, { "index": { "_id": 7 } }, { "@timestamp": "1234567897", "@timestamp_pretty": "17-12-2022", "missing_keyword": "yyy", "event_type": "failure", "host": "doom", "uptime": 15, "port": 1234, "bool": True, "os": "redhat", "version": "20.2.0", "id": 17 }, { "index": { "_id": 8 } }, { "@timestamp": "1234567898", "@timestamp_pretty": "12-12-2022", "missing_keyword": "test", "event_type": "success", "host": "doom", "uptime": 16, "port": 512, "os": "win10", 
"version": "1.2.3", "id": 18 }, { "index": { "_id": 9 } }, { "@timestamp": "1234567899", "@timestamp_pretty": "15-12-2022", "missing_keyword": "test", "event_type": "success", "host": "GTA", "port": 12, "bool": True, "os": "win10", "version": "1.2.3", "id": 19 }, { "index": { "_id": 10 } }, { "@timestamp": "1234567893", "missing_keyword": None, "ip": "10.0.0.5", "event_type": "alert", "host": "farcry", "uptime": 1, "port": 1234, "bool": True, "os": "win10", "version": "1.2.3", "id": 110 } ], ) print(resp3) resp4 = client.bulk( index="my-index-000002", refresh=True, operations=[ { "index": { "_id": 1 } }, { "@timestamp": "1234567991", "type_test": "abc", "ip": "10.0.0.1", "event_type": "alert", "host": "doom", "uptime": 0, "port": 1234, "op_sys": "win10", "id": 21 }, { "index": { "_id": 2 } }, { "@timestamp": "1234567992", "type_test": "abc", "event_type": "alert", "host": "CS", "uptime": 5, "port": 1, "op_sys": "win10", "id": 22 }, { "index": { "_id": 3 } }, { "@timestamp": "1234567993", "type_test": "abc", "@timestamp_pretty": "2022-12-17", "event_type": "alert", "host": "farcry", "uptime": 1, "port": 1234, "bool": False, "op_sys": "win10", "id": 23 }, { "index": { "_id": 4 } }, { "@timestamp": "1234567994", "event_type": "alert", "host": "GTA", "uptime": 3, "port": 12, "op_sys": "slack", "id": 24 }, { "index": { "_id": 5 } }, { "@timestamp": "1234567995", "event_type": "alert", "host": "sniper 3d", "uptime": 6, "port": 1234, "op_sys": "fedora", "id": 25 }, { "index": { "_id": 6 } }, { "@timestamp": "1234568996", "@timestamp_pretty": "2022-12-17", "ip": "10.0.0.5", "event_type": "alert", "host": "doom", "port": 65123, "bool": True, "op_sys": "redhat", "id": 26 }, { "index": { "_id": 7 } }, { "@timestamp": "1234567997", "@timestamp_pretty": "2022-12-17", "event_type": "failure", "host": "doom", "uptime": 15, "port": 1234, "bool": True, "op_sys": "redhat", "id": 27 }, { "index": { "_id": 8 } }, { "@timestamp": "1234567998", "ip": "10.0.0.1", "event_type": "success", "host": "doom", "uptime": 16, "port": 512, "op_sys": "win10", "id": 28 }, { "index": { "_id": 9 } }, { "@timestamp": "1234567999", "ip": "10.0.0.1", "event_type": "success", "host": "GTA", "port": 12, "bool": False, "op_sys": "win10", "id": 29 } ], ) print(resp4) resp5 = client.bulk( index="my-index-000003", refresh=True, operations=[ { "index": { "_id": 1 } }, { "@timestamp": "1334567891", "host_ip": "10.0.0.1", "event_type": "alert", "host": "doom", "uptime": 0, "port": 12, "os": "win10", "id": 31 }, { "index": { "_id": 2 } }, { "@timestamp": "1334567892", "event_type": "alert", "host": "CS", "os": "win10", "id": 32 }, { "index": { "_id": 3 } }, { "@timestamp": "1334567893", "event_type": "alert", "host": "farcry", "bool": True, "os": "win10", "id": 33 }, { "index": { "_id": 4 } }, { "@timestamp": "1334567894", "event_type": "alert", "host": "GTA", "os": "slack", "bool": True, "id": 34 }, { "index": { "_id": 5 } }, { "@timestamp": "1234567895", "event_type": "alert", "host": "sniper 3d", "os": "fedora", "id": 35 }, { "index": { "_id": 6 } }, { "@timestamp": "1234578896", "host_ip": "10.0.0.1", "event_type": "alert", "host": "doom", "bool": True, "os": "redhat", "id": 36 }, { "index": { "_id": 7 } }, { "@timestamp": "1234567897", "event_type": "failure", "missing_keyword": "test", "host": "doom", "bool": True, "os": "redhat", "id": 37 }, { "index": { "_id": 8 } }, { "@timestamp": "1234577898", "event_type": "success", "host": "doom", "os": "win10", "id": 38, "date": "1671235200000" }, { "index": { "_id": 9 } }, { "@timestamp": 
"1234577899", "host_ip": "10.0.0.5", "event_type": "success", "host": "GTA", "bool": True, "os": "win10", "id": 39 } ], ) print(resp5)
response = client.indices.create( index: 'my-index-000001', body: { mappings: { properties: { ip: { type: 'ip' }, version: { type: 'version' }, missing_keyword: { type: 'keyword' }, "@timestamp": { type: 'date' }, type_test: { type: 'keyword' }, "@timestamp_pretty": { type: 'date', format: 'dd-MM-yyyy' }, event_type: { type: 'keyword' }, event: { properties: { category: { type: 'alias', path: 'event_type' } } }, host: { type: 'keyword' }, os: { type: 'keyword' }, bool: { type: 'boolean' }, uptime: { type: 'long' }, port: { type: 'long' } } } } ) puts response response = client.indices.create( index: 'my-index-000002', body: { mappings: { properties: { ip: { type: 'ip' }, "@timestamp": { type: 'date' }, "@timestamp_pretty": { type: 'date', format: 'yyyy-MM-dd' }, type_test: { type: 'keyword' }, event_type: { type: 'keyword' }, event: { properties: { category: { type: 'alias', path: 'event_type' } } }, host: { type: 'keyword' }, op_sys: { type: 'keyword' }, bool: { type: 'boolean' }, uptime: { type: 'long' }, port: { type: 'long' } } } } ) puts response response = client.indices.create( index: 'my-index-000003', body: { mappings: { properties: { host_ip: { type: 'ip' }, "@timestamp": { type: 'date' }, date: { type: 'date' }, event_type: { type: 'keyword' }, event: { properties: { category: { type: 'alias', path: 'event_type' } } }, missing_keyword: { type: 'keyword' }, host: { type: 'keyword' }, os: { type: 'keyword' }, bool: { type: 'boolean' }, uptime: { type: 'long' }, port: { type: 'long' } } } } ) puts response response = client.bulk( index: 'my-index-000001', refresh: true, body: [ { index: { _id: 1 } }, { "@timestamp": '1234567891', "@timestamp_pretty": '12-12-2022', missing_keyword: 'test', type_test: 'abc', ip: '10.0.0.1', event_type: 'alert', host: 'doom', uptime: 0, port: 1234, os: 'win10', version: '1.0.0', id: 11 }, { index: { _id: 2 } }, { "@timestamp": '1234567892', "@timestamp_pretty": '13-12-2022', event_type: 'alert', type_test: 'abc', host: 'CS', uptime: 5, port: 1, os: 'win10', version: '1.2.0', id: 12 }, { index: { _id: 3 } }, { "@timestamp": '1234567893', "@timestamp_pretty": '12-12-2022', event_type: 'alert', type_test: 'abc', host: 'farcry', uptime: 1, port: 1234, bool: false, os: 'win10', version: '2.0.0', id: 13 }, { index: { _id: 4 } }, { "@timestamp": '1234567894', "@timestamp_pretty": '13-12-2022', event_type: 'alert', type_test: 'abc', host: 'GTA', uptime: 3, port: 12, os: 'slack', version: '10.0.0', id: 14 }, { index: { _id: 5 } }, { "@timestamp": '1234567895', "@timestamp_pretty": '17-12-2022', event_type: 'alert', host: 'sniper 3d', uptime: 6, port: 1234, os: 'fedora', version: '20.1.0', id: 15 }, { index: { _id: 6 } }, { "@timestamp": '1234568896', "@timestamp_pretty": '17-12-2022', event_type: 'alert', host: 'doom', port: 65_123, bool: true, os: 'redhat', version: '20.10.0', id: 16 }, { index: { _id: 7 } }, { "@timestamp": '1234567897', "@timestamp_pretty": '17-12-2022', missing_keyword: 'yyy', event_type: 'failure', host: 'doom', uptime: 15, port: 1234, bool: true, os: 'redhat', version: '20.2.0', id: 17 }, { index: { _id: 8 } }, { "@timestamp": '1234567898', "@timestamp_pretty": '12-12-2022', missing_keyword: 'test', event_type: 'success', host: 'doom', uptime: 16, port: 512, os: 'win10', version: '1.2.3', id: 18 }, { index: { _id: 9 } }, { "@timestamp": '1234567899', "@timestamp_pretty": '15-12-2022', missing_keyword: 'test', event_type: 'success', host: 'GTA', port: 12, bool: true, os: 'win10', version: '1.2.3', id: 19 }, { index: { _id: 10 } }, { 
"@timestamp": '1234567893', missing_keyword: nil, ip: '10.0.0.5', event_type: 'alert', host: 'farcry', uptime: 1, port: 1234, bool: true, os: 'win10', version: '1.2.3', id: 110 } ] ) puts response response = client.bulk( index: 'my-index-000002', refresh: true, body: [ { index: { _id: 1 } }, { "@timestamp": '1234567991', type_test: 'abc', ip: '10.0.0.1', event_type: 'alert', host: 'doom', uptime: 0, port: 1234, op_sys: 'win10', id: 21 }, { index: { _id: 2 } }, { "@timestamp": '1234567992', type_test: 'abc', event_type: 'alert', host: 'CS', uptime: 5, port: 1, op_sys: 'win10', id: 22 }, { index: { _id: 3 } }, { "@timestamp": '1234567993', type_test: 'abc', "@timestamp_pretty": '2022-12-17', event_type: 'alert', host: 'farcry', uptime: 1, port: 1234, bool: false, op_sys: 'win10', id: 23 }, { index: { _id: 4 } }, { "@timestamp": '1234567994', event_type: 'alert', host: 'GTA', uptime: 3, port: 12, op_sys: 'slack', id: 24 }, { index: { _id: 5 } }, { "@timestamp": '1234567995', event_type: 'alert', host: 'sniper 3d', uptime: 6, port: 1234, op_sys: 'fedora', id: 25 }, { index: { _id: 6 } }, { "@timestamp": '1234568996', "@timestamp_pretty": '2022-12-17', ip: '10.0.0.5', event_type: 'alert', host: 'doom', port: 65_123, bool: true, op_sys: 'redhat', id: 26 }, { index: { _id: 7 } }, { "@timestamp": '1234567997', "@timestamp_pretty": '2022-12-17', event_type: 'failure', host: 'doom', uptime: 15, port: 1234, bool: true, op_sys: 'redhat', id: 27 }, { index: { _id: 8 } }, { "@timestamp": '1234567998', ip: '10.0.0.1', event_type: 'success', host: 'doom', uptime: 16, port: 512, op_sys: 'win10', id: 28 }, { index: { _id: 9 } }, { "@timestamp": '1234567999', ip: '10.0.0.1', event_type: 'success', host: 'GTA', port: 12, bool: false, op_sys: 'win10', id: 29 } ] ) puts response response = client.bulk( index: 'my-index-000003', refresh: true, body: [ { index: { _id: 1 } }, { "@timestamp": '1334567891', host_ip: '10.0.0.1', event_type: 'alert', host: 'doom', uptime: 0, port: 12, os: 'win10', id: 31 }, { index: { _id: 2 } }, { "@timestamp": '1334567892', event_type: 'alert', host: 'CS', os: 'win10', id: 32 }, { index: { _id: 3 } }, { "@timestamp": '1334567893', event_type: 'alert', host: 'farcry', bool: true, os: 'win10', id: 33 }, { index: { _id: 4 } }, { "@timestamp": '1334567894', event_type: 'alert', host: 'GTA', os: 'slack', bool: true, id: 34 }, { index: { _id: 5 } }, { "@timestamp": '1234567895', event_type: 'alert', host: 'sniper 3d', os: 'fedora', id: 35 }, { index: { _id: 6 } }, { "@timestamp": '1234578896', host_ip: '10.0.0.1', event_type: 'alert', host: 'doom', bool: true, os: 'redhat', id: 36 }, { index: { _id: 7 } }, { "@timestamp": '1234567897', event_type: 'failure', missing_keyword: 'test', host: 'doom', bool: true, os: 'redhat', id: 37 }, { index: { _id: 8 } }, { "@timestamp": '1234577898', event_type: 'success', host: 'doom', os: 'win10', id: 38, date: '1671235200000' }, { index: { _id: 9 } }, { "@timestamp": '1234577899', host_ip: '10.0.0.5', event_type: 'success', host: 'GTA', bool: true, os: 'win10', id: 39 } ] ) puts response
const response = await client.indices.create({ index: "my-index-000001", mappings: { properties: { ip: { type: "ip", }, version: { type: "version", }, missing_keyword: { type: "keyword", }, "@timestamp": { type: "date", }, type_test: { type: "keyword", }, "@timestamp_pretty": { type: "date", format: "dd-MM-yyyy", }, event_type: { type: "keyword", }, event: { properties: { category: { type: "alias", path: "event_type", }, }, }, host: { type: "keyword", }, os: { type: "keyword", }, bool: { type: "boolean", }, uptime: { type: "long", }, port: { type: "long", }, }, }, }); console.log(response); const response1 = await client.indices.create({ index: "my-index-000002", mappings: { properties: { ip: { type: "ip", }, "@timestamp": { type: "date", }, "@timestamp_pretty": { type: "date", format: "yyyy-MM-dd", }, type_test: { type: "keyword", }, event_type: { type: "keyword", }, event: { properties: { category: { type: "alias", path: "event_type", }, }, }, host: { type: "keyword", }, op_sys: { type: "keyword", }, bool: { type: "boolean", }, uptime: { type: "long", }, port: { type: "long", }, }, }, }); console.log(response1); const response2 = await client.indices.create({ index: "my-index-000003", mappings: { properties: { host_ip: { type: "ip", }, "@timestamp": { type: "date", }, date: { type: "date", }, event_type: { type: "keyword", }, event: { properties: { category: { type: "alias", path: "event_type", }, }, }, missing_keyword: { type: "keyword", }, host: { type: "keyword", }, os: { type: "keyword", }, bool: { type: "boolean", }, uptime: { type: "long", }, port: { type: "long", }, }, }, }); console.log(response2); const response3 = await client.bulk({ index: "my-index-000001", refresh: "true", operations: [ { index: { _id: 1, }, }, { "@timestamp": "1234567891", "@timestamp_pretty": "12-12-2022", missing_keyword: "test", type_test: "abc", ip: "10.0.0.1", event_type: "alert", host: "doom", uptime: 0, port: 1234, os: "win10", version: "1.0.0", id: 11, }, { index: { _id: 2, }, }, { "@timestamp": "1234567892", "@timestamp_pretty": "13-12-2022", event_type: "alert", type_test: "abc", host: "CS", uptime: 5, port: 1, os: "win10", version: "1.2.0", id: 12, }, { index: { _id: 3, }, }, { "@timestamp": "1234567893", "@timestamp_pretty": "12-12-2022", event_type: "alert", type_test: "abc", host: "farcry", uptime: 1, port: 1234, bool: false, os: "win10", version: "2.0.0", id: 13, }, { index: { _id: 4, }, }, { "@timestamp": "1234567894", "@timestamp_pretty": "13-12-2022", event_type: "alert", type_test: "abc", host: "GTA", uptime: 3, port: 12, os: "slack", version: "10.0.0", id: 14, }, { index: { _id: 5, }, }, { "@timestamp": "1234567895", "@timestamp_pretty": "17-12-2022", event_type: "alert", host: "sniper 3d", uptime: 6, port: 1234, os: "fedora", version: "20.1.0", id: 15, }, { index: { _id: 6, }, }, { "@timestamp": "1234568896", "@timestamp_pretty": "17-12-2022", event_type: "alert", host: "doom", port: 65123, bool: true, os: "redhat", version: "20.10.0", id: 16, }, { index: { _id: 7, }, }, { "@timestamp": "1234567897", "@timestamp_pretty": "17-12-2022", missing_keyword: "yyy", event_type: "failure", host: "doom", uptime: 15, port: 1234, bool: true, os: "redhat", version: "20.2.0", id: 17, }, { index: { _id: 8, }, }, { "@timestamp": "1234567898", "@timestamp_pretty": "12-12-2022", missing_keyword: "test", event_type: "success", host: "doom", uptime: 16, port: 512, os: "win10", version: "1.2.3", id: 18, }, { index: { _id: 9, }, }, { "@timestamp": "1234567899", "@timestamp_pretty": "15-12-2022", 
missing_keyword: "test", event_type: "success", host: "GTA", port: 12, bool: true, os: "win10", version: "1.2.3", id: 19, }, { index: { _id: 10, }, }, { "@timestamp": "1234567893", missing_keyword: null, ip: "10.0.0.5", event_type: "alert", host: "farcry", uptime: 1, port: 1234, bool: true, os: "win10", version: "1.2.3", id: 110, }, ], }); console.log(response3); const response4 = await client.bulk({ index: "my-index-000002", refresh: "true", operations: [ { index: { _id: 1, }, }, { "@timestamp": "1234567991", type_test: "abc", ip: "10.0.0.1", event_type: "alert", host: "doom", uptime: 0, port: 1234, op_sys: "win10", id: 21, }, { index: { _id: 2, }, }, { "@timestamp": "1234567992", type_test: "abc", event_type: "alert", host: "CS", uptime: 5, port: 1, op_sys: "win10", id: 22, }, { index: { _id: 3, }, }, { "@timestamp": "1234567993", type_test: "abc", "@timestamp_pretty": "2022-12-17", event_type: "alert", host: "farcry", uptime: 1, port: 1234, bool: false, op_sys: "win10", id: 23, }, { index: { _id: 4, }, }, { "@timestamp": "1234567994", event_type: "alert", host: "GTA", uptime: 3, port: 12, op_sys: "slack", id: 24, }, { index: { _id: 5, }, }, { "@timestamp": "1234567995", event_type: "alert", host: "sniper 3d", uptime: 6, port: 1234, op_sys: "fedora", id: 25, }, { index: { _id: 6, }, }, { "@timestamp": "1234568996", "@timestamp_pretty": "2022-12-17", ip: "10.0.0.5", event_type: "alert", host: "doom", port: 65123, bool: true, op_sys: "redhat", id: 26, }, { index: { _id: 7, }, }, { "@timestamp": "1234567997", "@timestamp_pretty": "2022-12-17", event_type: "failure", host: "doom", uptime: 15, port: 1234, bool: true, op_sys: "redhat", id: 27, }, { index: { _id: 8, }, }, { "@timestamp": "1234567998", ip: "10.0.0.1", event_type: "success", host: "doom", uptime: 16, port: 512, op_sys: "win10", id: 28, }, { index: { _id: 9, }, }, { "@timestamp": "1234567999", ip: "10.0.0.1", event_type: "success", host: "GTA", port: 12, bool: false, op_sys: "win10", id: 29, }, ], }); console.log(response4); const response5 = await client.bulk({ index: "my-index-000003", refresh: "true", operations: [ { index: { _id: 1, }, }, { "@timestamp": "1334567891", host_ip: "10.0.0.1", event_type: "alert", host: "doom", uptime: 0, port: 12, os: "win10", id: 31, }, { index: { _id: 2, }, }, { "@timestamp": "1334567892", event_type: "alert", host: "CS", os: "win10", id: 32, }, { index: { _id: 3, }, }, { "@timestamp": "1334567893", event_type: "alert", host: "farcry", bool: true, os: "win10", id: 33, }, { index: { _id: 4, }, }, { "@timestamp": "1334567894", event_type: "alert", host: "GTA", os: "slack", bool: true, id: 34, }, { index: { _id: 5, }, }, { "@timestamp": "1234567895", event_type: "alert", host: "sniper 3d", os: "fedora", id: 35, }, { index: { _id: 6, }, }, { "@timestamp": "1234578896", host_ip: "10.0.0.1", event_type: "alert", host: "doom", bool: true, os: "redhat", id: 36, }, { index: { _id: 7, }, }, { "@timestamp": "1234567897", event_type: "failure", missing_keyword: "test", host: "doom", bool: true, os: "redhat", id: 37, }, { index: { _id: 8, }, }, { "@timestamp": "1234577898", event_type: "success", host: "doom", os: "win10", id: 38, date: "1671235200000", }, { index: { _id: 9, }, }, { "@timestamp": "1234577899", host_ip: "10.0.0.5", event_type: "success", host: "GTA", bool: true, os: "win10", id: 39, }, ], }); console.log(response5);
PUT /my-index-000001 { "mappings": { "properties": { "ip": { "type":"ip" }, "version": { "type": "version" }, "missing_keyword": { "type": "keyword" }, "@timestamp": { "type": "date" }, "type_test": { "type": "keyword" }, "@timestamp_pretty": { "type": "date", "format": "dd-MM-yyyy" }, "event_type": { "type": "keyword" }, "event": { "properties": { "category": { "type": "alias", "path": "event_type" } } }, "host": { "type": "keyword" }, "os": { "type": "keyword" }, "bool": { "type": "boolean" }, "uptime" : { "type" : "long" }, "port" : { "type" : "long" } } } } PUT /my-index-000002 { "mappings": { "properties": { "ip": { "type":"ip" }, "@timestamp": { "type": "date" }, "@timestamp_pretty": { "type": "date", "format": "yyyy-MM-dd" }, "type_test": { "type": "keyword" }, "event_type": { "type": "keyword" }, "event": { "properties": { "category": { "type": "alias", "path": "event_type" } } }, "host": { "type": "keyword" }, "op_sys": { "type": "keyword" }, "bool": { "type": "boolean" }, "uptime" : { "type" : "long" }, "port" : { "type" : "long" } } } } PUT /my-index-000003 { "mappings": { "properties": { "host_ip": { "type":"ip" }, "@timestamp": { "type": "date" }, "date": { "type": "date" }, "event_type": { "type": "keyword" }, "event": { "properties": { "category": { "type": "alias", "path": "event_type" } } }, "missing_keyword": { "type": "keyword" }, "host": { "type": "keyword" }, "os": { "type": "keyword" }, "bool": { "type": "boolean" }, "uptime" : { "type" : "long" }, "port" : { "type" : "long" } } } } POST /my-index-000001/_bulk?refresh {"index":{"_id":1}} {"@timestamp":"1234567891","@timestamp_pretty":"12-12-2022","missing_keyword":"test","type_test":"abc","ip":"10.0.0.1","event_type":"alert","host":"doom","uptime":0,"port":1234,"os":"win10","version":"1.0.0","id":11} {"index":{"_id":2}} {"@timestamp":"1234567892","@timestamp_pretty":"13-12-2022","event_type":"alert","type_test":"abc","host":"CS","uptime":5,"port":1,"os":"win10","version":"1.2.0","id":12} {"index":{"_id":3}} {"@timestamp":"1234567893","@timestamp_pretty":"12-12-2022","event_type":"alert","type_test":"abc","host":"farcry","uptime":1,"port":1234,"bool":false,"os":"win10","version":"2.0.0","id":13} {"index":{"_id":4}} {"@timestamp":"1234567894","@timestamp_pretty":"13-12-2022","event_type":"alert","type_test":"abc","host":"GTA","uptime":3,"port":12,"os":"slack","version":"10.0.0","id":14} {"index":{"_id":5}} {"@timestamp":"1234567895","@timestamp_pretty":"17-12-2022","event_type":"alert","host":"sniper 3d","uptime":6,"port":1234,"os":"fedora","version":"20.1.0","id":15} {"index":{"_id":6}} {"@timestamp":"1234568896","@timestamp_pretty":"17-12-2022","event_type":"alert","host":"doom","port":65123,"bool":true,"os":"redhat","version":"20.10.0","id":16} {"index":{"_id":7}} {"@timestamp":"1234567897","@timestamp_pretty":"17-12-2022","missing_keyword":"yyy","event_type":"failure","host":"doom","uptime":15,"port":1234,"bool":true,"os":"redhat","version":"20.2.0","id":17} {"index":{"_id":8}} {"@timestamp":"1234567898","@timestamp_pretty":"12-12-2022","missing_keyword":"test","event_type":"success","host":"doom","uptime":16,"port":512,"os":"win10","version":"1.2.3","id":18} {"index":{"_id":9}} {"@timestamp":"1234567899","@timestamp_pretty":"15-12-2022","missing_keyword":"test","event_type":"success","host":"GTA","port":12,"bool":true,"os":"win10","version":"1.2.3","id":19} {"index":{"_id":10}} 
{"@timestamp":"1234567893","missing_keyword":null,"ip":"10.0.0.5","event_type":"alert","host":"farcry","uptime":1,"port":1234,"bool":true,"os":"win10","version":"1.2.3","id":110} POST /my-index-000002/_bulk?refresh {"index":{"_id":1}} {"@timestamp":"1234567991","type_test":"abc","ip":"10.0.0.1","event_type":"alert","host":"doom","uptime":0,"port":1234,"op_sys":"win10","id":21} {"index":{"_id":2}} {"@timestamp":"1234567992","type_test":"abc","event_type":"alert","host":"CS","uptime":5,"port":1,"op_sys":"win10","id":22} {"index":{"_id":3}} {"@timestamp":"1234567993","type_test":"abc","@timestamp_pretty":"2022-12-17","event_type":"alert","host":"farcry","uptime":1,"port":1234,"bool":false,"op_sys":"win10","id":23} {"index":{"_id":4}} {"@timestamp":"1234567994","event_type":"alert","host":"GTA","uptime":3,"port":12,"op_sys":"slack","id":24} {"index":{"_id":5}} {"@timestamp":"1234567995","event_type":"alert","host":"sniper 3d","uptime":6,"port":1234,"op_sys":"fedora","id":25} {"index":{"_id":6}} {"@timestamp":"1234568996","@timestamp_pretty":"2022-12-17","ip":"10.0.0.5","event_type":"alert","host":"doom","port":65123,"bool":true,"op_sys":"redhat","id":26} {"index":{"_id":7}} {"@timestamp":"1234567997","@timestamp_pretty":"2022-12-17","event_type":"failure","host":"doom","uptime":15,"port":1234,"bool":true,"op_sys":"redhat","id":27} {"index":{"_id":8}} {"@timestamp":"1234567998","ip":"10.0.0.1","event_type":"success","host":"doom","uptime":16,"port":512,"op_sys":"win10","id":28} {"index":{"_id":9}} {"@timestamp":"1234567999","ip":"10.0.0.1","event_type":"success","host":"GTA","port":12,"bool":false,"op_sys":"win10","id":29} POST /my-index-000003/_bulk?refresh {"index":{"_id":1}} {"@timestamp":"1334567891","host_ip":"10.0.0.1","event_type":"alert","host":"doom","uptime":0,"port":12,"os":"win10","id":31} {"index":{"_id":2}} {"@timestamp":"1334567892","event_type":"alert","host":"CS","os":"win10","id":32} {"index":{"_id":3}} {"@timestamp":"1334567893","event_type":"alert","host":"farcry","bool":true,"os":"win10","id":33} {"index":{"_id":4}} {"@timestamp":"1334567894","event_type":"alert","host":"GTA","os":"slack","bool":true,"id":34} {"index":{"_id":5}} {"@timestamp":"1234567895","event_type":"alert","host":"sniper 3d","os":"fedora","id":35} {"index":{"_id":6}} {"@timestamp":"1234578896","host_ip":"10.0.0.1","event_type":"alert","host":"doom","bool":true,"os":"redhat","id":36} {"index":{"_id":7}} {"@timestamp":"1234567897","event_type":"failure","missing_keyword":"test","host":"doom","bool":true,"os":"redhat","id":37} {"index":{"_id":8}} {"@timestamp":"1234577898","event_type":"success","host":"doom","os":"win10","id":38,"date":"1671235200000"} {"index":{"_id":9}} {"@timestamp":"1234577899","host_ip":"10.0.0.5","event_type":"success","host":"GTA","bool":true,"os":"win10","id":39}
A sample query specifies at least one join key, using the by
keyword, and up to five filters:
resp = client.eql.search( index="my-index*", query="\n sample by host\n [any where uptime > 0]\n [any where port > 100]\n [any where bool == true]\n ", ) print(resp)
const response = await client.eql.search({ index: "my-index*", query: "\n sample by host\n [any where uptime > 0]\n [any where port > 100]\n [any where bool == true]\n ", }); console.log(response);
GET /my-index*/_eql/search { "query": """ sample by host [any where uptime > 0] [any where port > 100] [any where bool == true] """ }
By default, the response’s hits.sequences
property contains up to 10 samples.
Each sample has a set of join_keys
and an array with one matching event for
each of the filters. Events are returned in the order of the filters they match:
{ ... "hits": { "total": { "value": 2, "relation": "eq" }, "sequences": [ { "join_keys": [ "doom" ], "events": [ { "_index": "my-index-000001", "_id": "7", "_source": { "@timestamp": "1234567897", "@timestamp_pretty": "17-12-2022", "missing_keyword": "yyy", "event_type": "failure", "host": "doom", "uptime": 15, "port": 1234, "bool": true, "os": "redhat", "version": "20.2.0", "id": 17 } }, { "_index": "my-index-000001", "_id": "1", "_source": { "@timestamp": "1234567891", "@timestamp_pretty": "12-12-2022", "missing_keyword": "test", "type_test": "abc", "ip": "10.0.0.1", "event_type": "alert", "host": "doom", "uptime": 0, "port": 1234, "os": "win10", "version": "1.0.0", "id": 11 } }, { "_index": "my-index-000001", "_id": "6", "_source": { "@timestamp": "1234568896", "@timestamp_pretty": "17-12-2022", "event_type": "alert", "host": "doom", "port": 65123, "bool": true, "os": "redhat", "version": "20.10.0", "id": 16 } } ] }, { "join_keys": [ "farcry" ], "events": [ { "_index": "my-index-000001", "_id": "3", "_source": { "@timestamp": "1234567893", "@timestamp_pretty": "12-12-2022", "event_type": "alert", "type_test": "abc", "host": "farcry", "uptime": 1, "port": 1234, "bool": false, "os": "win10", "version": "2.0.0", "id": 13 } }, { "_index": "my-index-000001", "_id": "10", "_source": { "@timestamp": "1234567893", "missing_keyword": null, "ip": "10.0.0.5", "event_type": "alert", "host": "farcry", "uptime": 1, "port": 1234, "bool": true, "os": "win10", "version": "1.2.3", "id": 110 } }, { "_index": "my-index-000003", "_id": "3", "_source": { "@timestamp": "1334567893", "event_type": "alert", "host": "farcry", "bool": true, "os": "win10", "id": 33 } } ] } ] } }
The events in the first sample share the join key value doom for host. Within each sample, the first event matches the first filter, the second event matches the second filter, and the third event matches the third filter. The events in the second sample share the join key value farcry for host.
You can specify multiple join keys:
resp = client.eql.search( index="my-index*", query="\n sample by host\n [any where uptime > 0] by os\n [any where port > 100] by op_sys\n [any where bool == true] by os\n ", ) print(resp)
const response = await client.eql.search({ index: "my-index*", query: "\n sample by host\n [any where uptime > 0] by os\n [any where port > 100] by op_sys\n [any where bool == true] by os\n ", }); console.log(response);
GET /my-index*/_eql/search { "query": """ sample by host [any where uptime > 0] by os [any where port > 100] by op_sys [any where bool == true] by os """ }
This query returns samples where each of the events shares the same value for os or op_sys, as well as for host. For example:
{ ... "hits": { "total": { "value": 2, "relation": "eq" }, "sequences": [ { "join_keys": [ "doom", "redhat" ], "events": [ { "_index": "my-index-000001", "_id": "7", "_source": { "@timestamp": "1234567897", "@timestamp_pretty": "17-12-2022", "missing_keyword": "yyy", "event_type": "failure", "host": "doom", "uptime": 15, "port": 1234, "bool": true, "os": "redhat", "version": "20.2.0", "id": 17 } }, { "_index": "my-index-000002", "_id": "6", "_source": { "@timestamp": "1234568996", "@timestamp_pretty": "2022-12-17", "ip": "10.0.0.5", "event_type": "alert", "host": "doom", "port": 65123, "bool": true, "op_sys": "redhat", "id": 26 } }, { "_index": "my-index-000001", "_id": "6", "_source": { "@timestamp": "1234568896", "@timestamp_pretty": "17-12-2022", "event_type": "alert", "host": "doom", "port": 65123, "bool": true, "os": "redhat", "version": "20.10.0", "id": 16 } } ] }, { "join_keys": [ "farcry", "win10" ], "events": [ { "_index": "my-index-000001", "_id": "3", "_source": { "@timestamp": "1234567893", "@timestamp_pretty": "12-12-2022", "event_type": "alert", "type_test": "abc", "host": "farcry", "uptime": 1, "port": 1234, "bool": false, "os": "win10", "version": "2.0.0", "id": 13 } }, { "_index": "my-index-000002", "_id": "3", "_source": { "@timestamp": "1234567993", "type_test": "abc", "@timestamp_pretty": "2022-12-17", "event_type": "alert", "host": "farcry", "uptime": 1, "port": 1234, "bool": false, "op_sys": "win10", "id": 23 } }, { "_index": "my-index-000001", "_id": "10", "_source": { "@timestamp": "1234567893", "missing_keyword": null, "ip": "10.0.0.5", "event_type": "alert", "host": "farcry", "uptime": 1, "port": 1234, "bool": true, "os": "win10", "version": "1.2.3", "id": 110 } } ] } ] } }
By default, the response of a sample query contains up to 10 samples, with one
sample per unique set of join keys. Use the size
parameter to get a smaller or
larger set of samples. To retrieve more than one sample per set of join keys,
use the max_samples_per_key
parameter. Pipes are not supported for sample
queries.
resp = client.eql.search( index="my-index*", max_samples_per_key=2, size=20, query="\n sample\n [any where uptime > 0] by host,os\n [any where port > 100] by host,op_sys\n [any where bool == true] by host,os\n ", ) print(resp)
const response = await client.eql.search({ index: "my-index*", max_samples_per_key: 2, size: 20, query: "\n sample\n [any where uptime > 0] by host,os\n [any where port > 100] by host,op_sys\n [any where bool == true] by host,os\n ", }); console.log(response);
GET /my-index*/_eql/search { "max_samples_per_key": 2, "size": 20, "query": """ sample [any where uptime > 0] by host,os [any where port > 100] by host,op_sys [any where bool == true] by host,os """ }
Retrieve selected fields
By default, each hit in the search response includes the document _source
,
which is the entire JSON object that was provided when indexing the document.
You can use the filter_path
query
parameter to filter the API response. For example, the following search returns
only the timestamp and PID from the _source
of each matching event.
resp = client.eql.search( index="my-data-stream", filter_path="hits.events._source.@timestamp,hits.events._source.process.pid", query="\n process where process.name == \"regsvr32.exe\"\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", filter_path: "hits.events._source.@timestamp,hits.events._source.process.pid", query: '\n process where process.name == "regsvr32.exe"\n ', }); console.log(response);
GET /my-data-stream/_eql/search?filter_path=hits.events._source.@timestamp,hits.events._source.process.pid { "query": """ process where process.name == "regsvr32.exe" """ }
The API returns the following response.
{ "hits": { "events": [ { "_source": { "@timestamp": "2099-12-07T11:07:09.000Z", "process": { "pid": 2012 } } }, { "_source": { "@timestamp": "2099-12-07T11:07:10.000Z", "process": { "pid": 2012 } } } ] } }
You can also use the fields
parameter to retrieve and format specific fields
in the response. This parameter is identical to the search API’s fields parameter.
Because it consults the index mappings, the fields
parameter provides several
advantages over referencing the _source
directly. Specifically, the fields
parameter:
- Returns each value in a standardized way that matches its mapping type
- Accepts multi-fields and field aliases
- Formats dates and spatial data types
- Retrieves runtime field values
- Returns fields calculated by a script at index time
- Returns fields from related indices using lookup runtime fields
The following search request uses the fields
parameter to retrieve values for
the event.type
field, all fields starting with process.
, and the
@timestamp
field. The request also uses the filter_path
query parameter to
exclude the _source
of each hit.
resp = client.eql.search( index="my-data-stream", filter_path="-hits.events._source", query="\n process where process.name == \"regsvr32.exe\"\n ", fields=[ "event.type", "process.*", { "field": "@timestamp", "format": "epoch_millis" } ], ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", filter_path: "-hits.events._source", query: '\n process where process.name == "regsvr32.exe"\n ', fields: [ "event.type", "process.*", { field: "@timestamp", format: "epoch_millis", }, ], }); console.log(response);
GET /my-data-stream/_eql/search?filter_path=-hits.events._source { "query": """ process where process.name == "regsvr32.exe" """, "fields": [ "event.type", "process.*", { "field": "@timestamp", "format": "epoch_millis" } ] }
Both full field names and wildcard patterns are accepted. Use the format parameter to apply a custom format to a field’s values, as shown for the @timestamp field in this example.
The response includes values as a flat list in the fields
section for each
hit.
{ ... "hits": { "total": ..., "events": [ { "_index": ".ds-my-data-stream-2099.12.07-000001", "_id": "OQmfCaduce8zoHT93o4H", "fields": { "process.name": [ "regsvr32.exe" ], "process.name.keyword": [ "regsvr32.exe" ], "@timestamp": [ "4100324829000" ], "process.command_line": [ "regsvr32.exe /s /u /i:https://...RegSvr32.sct scrobj.dll" ], "process.command_line.keyword": [ "regsvr32.exe /s /u /i:https://...RegSvr32.sct scrobj.dll" ], "process.executable.keyword": [ "C:\\Windows\\System32\\regsvr32.exe" ], "process.pid": [ 2012 ], "process.executable": [ "C:\\Windows\\System32\\regsvr32.exe" ] } }, .... ] } }
Use runtime fields
Use the runtime_mappings
parameter to extract and create runtime
fields during a search. Use the fields
parameter to include runtime fields
in the response.
The following search creates a day_of_week
runtime field from the @timestamp
and returns it in the response.
resp = client.eql.search( index="my-data-stream", filter_path="-hits.events._source", runtime_mappings={ "day_of_week": { "type": "keyword", "script": "emit(doc['@timestamp'].value.dayOfWeekEnum.toString())" } }, query="\n process where process.name == \"regsvr32.exe\"\n ", fields=[ "@timestamp", "day_of_week" ], ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", filter_path: "-hits.events._source", runtime_mappings: { day_of_week: { type: "keyword", script: "emit(doc['@timestamp'].value.dayOfWeekEnum.toString())", }, }, query: '\n process where process.name == "regsvr32.exe"\n ', fields: ["@timestamp", "day_of_week"], }); console.log(response);
GET /my-data-stream/_eql/search?filter_path=-hits.events._source { "runtime_mappings": { "day_of_week": { "type": "keyword", "script": "emit(doc['@timestamp'].value.dayOfWeekEnum.toString())" } }, "query": """ process where process.name == "regsvr32.exe" """, "fields": [ "@timestamp", "day_of_week" ] }
The API returns:
{ ... "hits": { "total": ..., "events": [ { "_index": ".ds-my-data-stream-2099.12.07-000001", "_id": "OQmfCaduce8zoHT93o4H", "fields": { "@timestamp": [ "2099-12-07T11:07:09.000Z" ], "day_of_week": [ "MONDAY" ] } }, .... ] } }
Specify a timestamp or event category field
The EQL search API uses the @timestamp
and event.category
fields from the
ECS by default. To specify different fields, use the
timestamp_field
and event_category_field
parameters:
resp = client.eql.search( index="my-data-stream", timestamp_field="file.accessed", event_category_field="file.type", query="\n file where (file.size > 1 and file.type == \"file\")\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", timestamp_field: "file.accessed", event_category_field: "file.type", query: '\n file where (file.size > 1 and file.type == "file")\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "timestamp_field": "file.accessed", "event_category_field": "file.type", "query": """ file where (file.size > 1 and file.type == "file") """ }
The event category field must be mapped as a keyword
family field
type. The timestamp field should be mapped as a date
field type.
date_nanos
timestamp fields are not supported. You cannot use a
nested
field or the sub-fields of a nested
field as the timestamp
or event category field.
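For reference, an index whose mappings satisfy these constraints for the custom fields used in the preceding request might be created as follows. This is a minimal sketch, assuming a configured Python client; the index name is hypothetical, and you should adapt the field names to your own data.
resp = client.indices.create(
    index="my-index-000004",  # hypothetical index name
    mappings={
        "properties": {
            "file": {
                "properties": {
                    "accessed": {"type": "date"},  # timestamp field: date (date_nanos is not supported)
                    "type": {"type": "keyword"},   # event category field: keyword family
                    "size": {"type": "long"}
                }
            }
        }
    },
)
print(resp)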
Specify a sort tiebreaker
By default, the EQL search API returns matching hits by timestamp. If two or more events share the same timestamp, Elasticsearch uses a tiebreaker field value to sort the events in ascending order. Elasticsearch orders events with no tiebreaker value after events with a value.
If you don’t specify a tiebreaker field or the events also share the same tiebreaker value, Elasticsearch considers the events concurrent and may not return them in a consistent sort order.
To specify a tiebreaker field, use the tiebreaker_field
parameter. If you use
the ECS, we recommend using event.sequence
as the tiebreaker field.
resp = client.eql.search( index="my-data-stream", tiebreaker_field="event.sequence", query="\n process where process.name == \"cmd.exe\" and stringContains(process.executable, \"System32\")\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", tiebreaker_field: "event.sequence", query: '\n process where process.name == "cmd.exe" and stringContains(process.executable, "System32")\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "tiebreaker_field": "event.sequence", "query": """ process where process.name == "cmd.exe" and stringContains(process.executable, "System32") """ }
Filter using Query DSL
The filter
parameter uses Query DSL to limit the documents on
which an EQL query runs.
resp = client.eql.search( index="my-data-stream", filter={ "range": { "@timestamp": { "gte": "now-1d/d", "lt": "now/d" } } }, query="\n file where (file.type == \"file\" and file.name == \"cmd.exe\")\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", filter: { range: { "@timestamp": { gte: "now-1d/d", lt: "now/d", }, }, }, query: '\n file where (file.type == "file" and file.name == "cmd.exe")\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "filter": { "range": { "@timestamp": { "gte": "now-1d/d", "lt": "now/d" } } }, "query": """ file where (file.type == "file" and file.name == "cmd.exe") """ }
Run an async EQL search
By default, EQL search requests are synchronous and wait for complete results before returning a response. However, complete results can take longer for searches across large data sets or frozen data.
To avoid long waits, run an async EQL search. Set wait_for_completion_timeout
to a duration you’d like to wait for synchronous results.
resp = client.eql.search( index="my-data-stream", wait_for_completion_timeout="2s", query="\n process where process.name == \"cmd.exe\"\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", wait_for_completion_timeout: "2s", query: '\n process where process.name == "cmd.exe"\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "wait_for_completion_timeout": "2s", "query": """ process where process.name == "cmd.exe" """ }
If the request doesn’t finish within the timeout period, the search becomes async and returns a response that includes:
- A search ID
- An is_partial value of true, indicating the search results are incomplete
- An is_running value of true, indicating the search is ongoing
The async search continues to run in the background without blocking other requests.
{ "id": "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", "is_partial": true, "is_running": true, "took": 2000, "timed_out": false, "hits": ... }
To check the progress of an async search, use the get
async EQL search API with the search ID. Specify how long you’d like to wait for complete results in the wait_for_completion_timeout parameter.
resp = client.eql.get( id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", wait_for_completion_timeout="2s", ) print(resp)
response = client.eql.get( id: 'FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=', wait_for_completion_timeout: '2s' ) puts response
const response = await client.eql.get({ id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", wait_for_completion_timeout: "2s", }); console.log(response);
GET /_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?wait_for_completion_timeout=2s
If the response’s is_running
value is false
, the async search has finished.
If the is_partial
value is false
, the returned search results are
complete.
{ "id": "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", "is_partial": false, "is_running": false, "took": 2000, "timed_out": false, "hits": ... }
A more lightweight way to check the progress of an async search is to use the get async EQL status API with the search ID.
resp = client.eql.get_status( id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", ) print(resp)
response = client.eql.get_status( id: 'FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=' ) puts response
const response = await client.eql.getStatus({ id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", }); console.log(response);
GET /_eql/search/status/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=
{ "id": "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", "is_running": false, "is_partial": false, "expiration_time_in_millis": 1611690295000, "completion_status": 200 }
Change the search retention period
By default, the EQL search API stores async searches for five days. After this
period, any searches and their results are deleted. Use the keep_alive
parameter to change this retention period:
resp = client.eql.search( index="my-data-stream", keep_alive="2d", wait_for_completion_timeout="2s", query="\n process where process.name == \"cmd.exe\"\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", keep_alive: "2d", wait_for_completion_timeout: "2s", query: '\n process where process.name == "cmd.exe"\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "keep_alive": "2d", "wait_for_completion_timeout": "2s", "query": """ process where process.name == "cmd.exe" """ }
You can use the get async EQL search API's
keep_alive
parameter to later change the retention period. The new retention
period starts after the get request runs.
resp = client.eql.get( id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", keep_alive="5d", ) print(resp)
response = client.eql.get( id: 'FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=', keep_alive: '5d' ) puts response
const response = await client.eql.get({ id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", keep_alive: "5d", }); console.log(response);
GET /_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?keep_alive=5d
Use the delete async EQL search API to
manually delete an async EQL search before the keep_alive
period ends. If the
search is still ongoing, Elasticsearch cancels the search request.
resp = client.eql.delete( id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", ) print(resp)
response = client.eql.delete( id: 'FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=' ) puts response
const response = await client.eql.delete({ id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=", }); console.log(response);
DELETE /_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=
Store synchronous EQL searches
By default, the EQL search API only stores async searches. To save a synchronous
search, set keep_on_completion
to true
:
resp = client.eql.search( index="my-data-stream", keep_on_completion=True, wait_for_completion_timeout="2s", query="\n process where process.name == \"cmd.exe\"\n ", ) print(resp)
const response = await client.eql.search({ index: "my-data-stream", keep_on_completion: true, wait_for_completion_timeout: "2s", query: '\n process where process.name == "cmd.exe"\n ', }); console.log(response);
GET /my-data-stream/_eql/search { "keep_on_completion": true, "wait_for_completion_timeout": "2s", "query": """ process where process.name == "cmd.exe" """ }
The response includes a search ID. is_partial
and is_running
are false
,
indicating the EQL search was synchronous and returned complete results.
{ "id": "FjlmbndxNmJjU0RPdExBTGg0elNOOEEaQk9xSjJBQzBRMldZa1VVQ2pPa01YUToxMDY=", "is_partial": false, "is_running": false, "took": 52, "timed_out": false, "hits": ... }
Use the get async EQL search API to get the same results later:
resp = client.eql.get( id="FjlmbndxNmJjU0RPdExBTGg0elNOOEEaQk9xSjJBQzBRMldZa1VVQ2pPa01YUToxMDY=", ) print(resp)
response = client.eql.get( id: 'FjlmbndxNmJjU0RPdExBTGg0elNOOEEaQk9xSjJBQzBRMldZa1VVQ2pPa01YUToxMDY=' ) puts response
const response = await client.eql.get({ id: "FjlmbndxNmJjU0RPdExBTGg0elNOOEEaQk9xSjJBQzBRMldZa1VVQ2pPa01YUToxMDY=", }); console.log(response);
GET /_eql/search/FjlmbndxNmJjU0RPdExBTGg0elNOOEEaQk9xSjJBQzBRMldZa1VVQ2pPa01YUToxMDY=
Saved synchronous searches are still subject to the keep_alive
parameter’s
retention period. When this period ends, the search and its results are deleted.
You can also check the status of a saved synchronous search, without retrieving its results, by using the get async EQL status API.
You can also manually delete saved synchronous searches using the delete async EQL search API.
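For illustration, a saved synchronous search can be inspected and then cleaned up by its ID in the same way as an async search. A minimal sketch with the Python client, reusing the example search ID from above:
saved_id = "FjlmbndxNmJjU0RPdExBTGg0elNOOEEaQk9xSjJBQzBRMldZa1VVQ2pPa01YUToxMDY="  # ID of the saved synchronous search

# Check only the status, without retrieving the results.
resp = client.eql.get_status(id=saved_id)
print(resp)

# Delete the saved search and its results before the keep_alive period ends.
resp = client.eql.delete(id=saved_id)
print(resp)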
Run an EQL search across clusters
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
The EQL search API supports cross-cluster search. However, the local and remote clusters must use the same Elasticsearch version if they are on versions prior to 7.17.7 (inclusive) or prior to 8.5.1 (inclusive).
The following cluster update settings request
adds two remote clusters: cluster_one
and cluster_two
.
resp = client.cluster.put_settings( persistent={ "cluster": { "remote": { "cluster_one": { "seeds": [ "127.0.0.1:9300" ] }, "cluster_two": { "seeds": [ "127.0.0.1:9301" ] } } } }, ) print(resp)
response = client.cluster.put_settings( body: { persistent: { cluster: { remote: { cluster_one: { seeds: [ '127.0.0.1:9300' ] }, cluster_two: { seeds: [ '127.0.0.1:9301' ] } } } } } ) puts response
const response = await client.cluster.putSettings({ persistent: { cluster: { remote: { cluster_one: { seeds: ["127.0.0.1:9300"], }, cluster_two: { seeds: ["127.0.0.1:9301"], }, }, }, }, }); console.log(response);
PUT /_cluster/settings { "persistent": { "cluster": { "remote": { "cluster_one": { "seeds": [ "127.0.0.1:9300" ] }, "cluster_two": { "seeds": [ "127.0.0.1:9301" ] } } } } }
To target a data stream or index on a remote cluster, use the
<cluster>:<target>
syntax.
resp = client.eql.search( index="cluster_one:my-data-stream,cluster_two:my-data-stream", query="\n process where process.name == \"regsvr32.exe\"\n ", ) print(resp)
const response = await client.eql.search({ index: "cluster_one:my-data-stream,cluster_two:my-data-stream", query: '\n process where process.name == "regsvr32.exe"\n ', }); console.log(response);
GET /cluster_one:my-data-stream,cluster_two:my-data-stream/_eql/search { "query": """ process where process.name == "regsvr32.exe" """ }
EQL circuit breaker settings
The relevant circuit breaker settings can be found on the Circuit breaker settings page.