API conventions

The Elasticsearch REST APIs are exposed over HTTP. Except where noted, the following conventions apply across all APIs.
Content-type requirements

The type of the content sent in a request body must be specified using the Content-Type header. The value of this header must map to one of the formats that the API supports. Most APIs support JSON, YAML, CBOR, and SMILE. The bulk and multi-search APIs support NDJSON, JSON, and SMILE; other types result in an error response.

When using the source query string parameter, the content type must be specified using the source_content_type query string parameter.

Elasticsearch only supports UTF-8-encoded JSON. Elasticsearch ignores any other encoding headers sent with a request. Responses are also UTF-8 encoded.
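As a sketch of these requirements, the following builds a request with the required Content-Type header using only the Python standard library. This is not the official client, and the localhost URL and index name are example assumptions:

```python
# Minimal sketch (not the official Elasticsearch client): building a request
# with the required Content-Type header. The host URL and index name are
# assumptions; point them at your own cluster.
import json
import urllib.request

def build_request(method, path, body=None, host="http://localhost:9200"):
    # Request bodies must be UTF-8 encoded; responses are UTF-8 as well.
    data = json.dumps(body).encode("utf-8") if body is not None else None
    return urllib.request.Request(
        host + path,
        data=data,
        method=method,
        # Omitting this header, or sending an unsupported type, yields an
        # error response from the API.
        headers={"Content-Type": "application/json"},
    )

req = build_request("POST", "/my-index/_search", {"query": {"match_all": {}}})
# urllib.request.urlopen(req) would send it to a running cluster.
```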
X-Opaque-Id HTTP header

You can pass an X-Opaque-Id HTTP header to track the origin of a request in Elasticsearch logs and tasks. If provided, Elasticsearch surfaces the X-Opaque-Id value in the:

- Response of any request that includes the header
- Task management API response
- Slow logs
- Deprecation logs

For the deprecation logs, Elasticsearch also uses the X-Opaque-Id value to throttle and deduplicate deprecation warnings. See Deprecation logs throttling.

The X-Opaque-Id header accepts any arbitrary value. However, we recommend you limit these values to a finite set, such as an ID per client. Don't generate a unique X-Opaque-Id header for every request. Too many unique X-Opaque-Id values can prevent Elasticsearch from deduplicating warnings in the deprecation logs.
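A sketch of the recommended usage, with one stable ID per client rather than per request (the client name "billing-service" is an arbitrary example value):

```python
# Sketch: reuse a single X-Opaque-Id per client so Elasticsearch can
# throttle and deduplicate deprecation warnings. "billing-service" is an
# example value; use an identifier for your own client.
import urllib.request

CLIENT_ID = "billing-service"  # one stable ID per client, not per request

def tag_request(req: urllib.request.Request) -> urllib.request.Request:
    req.add_header("X-Opaque-Id", CLIENT_ID)
    return req

req = tag_request(urllib.request.Request("http://localhost:9200/_tasks"))
```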
traceparent HTTP header

Elasticsearch also supports a traceparent HTTP header that follows the official W3C trace context spec. You can use the traceparent header to trace requests across Elastic products and other services. Because it's only used for traces, you can safely generate a unique traceparent header for each request.

If provided, Elasticsearch surfaces the header's trace-id value as trace.id in its logs. For example, the following traceparent value would produce the following trace.id value in those logs:

`traceparent`: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
`trace.id`: 0af7651916cd43dd8448eb211c80319c
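Since a unique traceparent per request is safe, one way to generate it is a sketch like the following, which follows the W3C format of four dash-separated fields (version, trace-id, parent-id, flags):

```python
# Sketch: generating a W3C trace-context traceparent header per request.
# Format: <version>-<trace-id>-<parent-id>-<flags>, with 2, 32, 16, and 2
# lowercase hex digits respectively.
import secrets

def new_traceparent() -> str:
    trace_id = secrets.token_hex(16)   # 32 hex chars; surfaced as trace.id
    parent_id = secrets.token_hex(8)   # 16 hex chars
    return f"00-{trace_id}-{parent_id}-01"

header = new_traceparent()
trace_id = header.split("-")[1]  # the value Elasticsearch logs as trace.id
```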
GET and POST requests

A number of Elasticsearch GET APIs, most notably the search API, support a request body. While the GET action makes sense in the context of retrieving information, GET requests with a body are not supported by all HTTP libraries. All Elasticsearch GET APIs that require a body can also be submitted as POST requests. Alternatively, you can pass the request body as the source query string parameter when using GET.
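As a sketch, here is the same search expressed in the three forms just described. The index name and query are example values; the tuples stand in for whatever request representation your HTTP library uses:

```python
# Sketch: one search expressed three ways, for HTTP libraries that cannot
# attach a body to a GET request. Index name and query are example values.
import json
import urllib.parse

query = {"query": {"match": {"user.id": "kimchy"}}}
body = json.dumps(query)

# 1) GET with a request body (works with curl, not with every library)
get_with_body = ("GET", "/my-index/_search", body)

# 2) The identical request submitted as POST
post = ("POST", "/my-index/_search", body)

# 3) The body passed in the query string via the source parameter; the
#    content type must then be given as source_content_type
params = urllib.parse.urlencode(
    {"source": body, "source_content_type": "application/json"}
)
get_with_source = ("GET", f"/my-index/_search?{params}", None)
```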
Cron expressions

A cron expression is a string of the following form:

<seconds> <minutes> <hours> <day_of_month> <month> <day_of_week> [year]

Elasticsearch uses the cron parser from the Quartz Job Scheduler. For more information about writing Quartz cron expressions, see the Quartz CronTrigger Tutorial.

All schedule times are in coordinated universal time (UTC); other timezones are not supported.

You can use the elasticsearch-croneval command line tool to validate your cron expressions.

Cron expression elements

All elements are required except for year. See Cron special characters for information about the allowed special characters.
- <seconds>: (Required) Valid values: 0-59 and the special characters , - * /
- <minutes>: (Required) Valid values: 0-59 and the special characters , - * /
- <hours>: (Required) Valid values: 0-23 and the special characters , - * /
- <day_of_month>: (Required) Valid values: 1-31 and the special characters , - * / ? L W
- <month>: (Required) Valid values: 1-12, JAN-DEC, jan-dec, and the special characters , - * /
- <day_of_week>: (Required) Valid values: 1-7, SUN-SAT, sun-sat, and the special characters , - * / ? L #
- <year>: (Optional) Valid values: 1970-2099 and the special characters , - * /
Cron special characters

- *: Selects every possible value for a field. For example, * in the hours field means "every hour".
- ?: No specific value. Use when you don't care what the value is. For example, if you want the schedule to trigger on a particular day of the month, but don't care what day of the week that happens to be, you can specify ? in the day_of_week field.
- -: A range of values (inclusive). Use to separate a minimum and maximum value. For example, if you want the schedule to trigger every hour between 9:00 a.m. and 5:00 p.m., you could specify 9-17 in the hours field.
- ,: Multiple values. Use to separate multiple values for a field. For example, if you want the schedule to trigger every Tuesday and Thursday, you could specify TUE,THU in the day_of_week field.
- /: Increment. Use to separate values when specifying a time increment. The first value represents the starting point, and the second value represents the interval. For example, if you want the schedule to trigger every 20 minutes starting at the top of the hour, you could specify 0/20 in the minutes field. Similarly, specifying 1/5 in the day_of_month field will trigger every 5 days starting on the first day of the month.
- L: Last. Use in the day_of_month field to mean the last day of the month (day 31 for January, day 28 for February in non-leap years, day 30 for April, and so on). Use alone in the day_of_week field in place of 7 or SAT, or after a particular day of the week to select the last day of that type in the month. For example, 6L means the last Friday of the month. You can specify LW in the day_of_month field to specify the last weekday of the month. Avoid using the L option when specifying lists or ranges of values, as the results likely won't be what you expect.
- W: Weekday. Use to specify the weekday (Monday-Friday) nearest the given day. As an example, if you specify 15W in the day_of_month field and the 15th is a Saturday, the schedule will trigger on Friday the 14th. If the 15th is a Sunday, the schedule will trigger on Monday the 16th. If the 15th is a Tuesday, the schedule will trigger on Tuesday the 15th. However, if you specify 1W as the value for day_of_month and the 1st is a Saturday, the schedule will trigger on Monday the 3rd; it won't jump over the month boundary. You can only use the W option when the day_of_month is a single day; it is not valid when specifying a range or list of days.
- #: Nth XXX day in a month. Use in the day_of_week field to specify the nth XXX day of the month. For example, if you specify 6#1, the schedule will trigger on the first Friday of the month. Note that if you specify 3#5 and there are not 5 Tuesdays in a particular month, the schedule won't trigger that month.
Examples

Setting daily triggers

- 0 5 9 * * ?: Trigger at 9:05 a.m. UTC every day.
- 0 5 9 * * ? 2020: Trigger at 9:05 a.m. UTC every day during the year 2020.

Restricting triggers to a range of days or times

- 0 5 9 ? * MON-FRI: Trigger at 9:05 a.m. UTC Monday through Friday.
- 0 0-5 9 * * ?: Trigger every minute starting at 9:00 a.m. UTC and ending at 9:05 a.m. UTC every day.

Setting interval triggers

- 0 0/15 9 * * ?: Trigger every 15 minutes starting at 9:00 a.m. UTC and ending at 9:45 a.m. UTC every day.
- 0 5 9 1/3 * ?: Trigger at 9:05 a.m. UTC every 3 days every month, starting on the first day of the month.

Setting schedules that trigger on a particular day

- 0 1 4 1 4 ?: Trigger every April 1st at 4:01 a.m. UTC.
- 0 0,30 9 ? 4 WED: Trigger at 9:00 a.m. UTC and at 9:30 a.m. UTC every Wednesday in the month of April.
- 0 5 9 15 * ?: Trigger at 9:05 a.m. UTC on the 15th day of every month.
- 0 5 9 15W * ?: Trigger at 9:05 a.m. UTC on the nearest weekday to the 15th of every month.
- 0 5 9 ? * 6#1: Trigger at 9:05 a.m. UTC on the first Friday of every month.

Setting triggers using last

- 0 5 9 L * ?: Trigger at 9:05 a.m. UTC on the last day of every month.
- 0 5 9 ? * 2L: Trigger at 9:05 a.m. UTC on the last Monday of every month.
- 0 5 9 LW * ?: Trigger at 9:05 a.m. UTC on the last weekday of every month.
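The field layout above can be sanity-checked with a rough sketch like the following. It only verifies the field count and plain numeric ranges; ranges, steps, and the L/W/# characters are left to a real parser such as the elasticsearch-croneval tool:

```python
# Rough sketch of a validator for the cron field layout described above:
# six required fields plus an optional year. It checks only the field
# count and bare numeric values; it is NOT a full Quartz parser.
FIELD_RANGES = [
    (0, 59),      # seconds
    (0, 59),      # minutes
    (0, 23),      # hours
    (1, 31),      # day_of_month
    (1, 12),      # month
    (1, 7),       # day_of_week
    (1970, 2099), # year (optional)
]

def rough_validate(expression: str) -> bool:
    fields = expression.split()
    if len(fields) not in (6, 7):
        return False
    for value, (lo, hi) in zip(fields, FIELD_RANGES):
        # Only plain numbers are range-checked; *, ?, ranges, steps,
        # and L/W/# pass through unexamined.
        if value.isdigit() and not lo <= int(value) <= hi:
            return False
    return True

assert rough_validate("0 5 9 * * ?")       # 9:05 a.m. UTC every day
assert not rough_validate("0 5 9 * *")     # too few fields
assert not rough_validate("0 61 9 * * ?")  # minutes out of range
```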
Date math support in index and index alias names

Date math name resolution lets you search a range of time series indices or index aliases rather than searching all of your indices and filtering the results. Limiting the number of searched indices reduces cluster load and improves search performance. For example, if you are searching for errors in your daily logs, you can use a date math name template to restrict the search to the past two days.

Most APIs that accept an index or index alias argument support date math. A date math name takes the following form:

<static_name{date_math_expr{date_format|time_zone}}>

Where:

| Element | Description |
|---|---|
| static_name | Static text |
| date_math_expr | Dynamic date math expression that computes the date dynamically |
| date_format | Optional format in which the computed date should be rendered. Defaults to yyyy.MM.dd. |
| time_zone | Optional time zone. Defaults to UTC. |

Pay attention to the use of lowercase vs. uppercase letters in the date_format. For example, mm denotes minute of hour, while MM denotes month of year. Similarly, hh denotes the hour in the 1-12 range in combination with AM/PM, while HH denotes the hour in the 0-23 24-hour range.

Date math expressions are resolved independently of locale. Consequently, you cannot use any calendar other than the Gregorian calendar.
You must enclose date math names in angle brackets. If you use the name in a request path, special characters must be URI encoded. For example:

Python:

```python
resp = client.indices.create(
    index="<my-index-{now/d}>",
)
print(resp)
```

Ruby:

```ruby
response = client.indices.create(
  index: '<my-index-{now/d}>'
)
puts response
```

JavaScript:

```js
const response = await client.indices.create({
  index: "<my-index-{now/d}>",
});
console.log(response);
```

Console:

```console
# PUT /<my-index-{now/d}>
PUT /%3Cmy-index-%7Bnow%2Fd%7D%3E
```
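To illustrate how such a name resolves, here is a deliberately simplified resolver that handles only the `now/d` rounding shown above; the full syntax also allows offsets, other rounding units, explicit formats, and time zones:

```python
# Sketch: how a date math name like <my-index-{now/d}> resolves. This
# simplified resolver supports only "now/d" (round down to the day) and
# always renders with the default yyyy.MM.dd format in UTC.
from datetime import datetime, timezone

def resolve(template: str, now: datetime) -> str:
    # <static{now/d}> -> static part + the UTC date
    static, _, _ = template.strip("<>").partition("{")
    return static + now.strftime("%Y.%m.%d")

now = datetime(2024, 3, 22, 12, 0, tzinfo=timezone.utc)
resolve("<my-index-{now/d}>", now)  # "my-index-2024.03.22"
```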
Percent encoding of date math characters

The special characters used for date rounding must be URI encoded as follows:

| Character | URI encoding |
|---|---|
| < | %3C |
| > | %3E |
| / | %2F |
| { | %7B |
| } | %7D |
| \| | %7C |
| + | %2B |
| : | %3A |
| , | %2C |
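In Python, for example, the standard library produces this encoding directly; passing safe="" ensures that / is encoded along with the other special characters:

```python
# Sketch: URI-encoding a date math name for use in a request path.
# quote(..., safe="") percent-encodes every reserved character, including
# the "/" that quote() would otherwise leave intact.
from urllib.parse import quote

encoded = quote("<my-index-{now/d}>", safe="")
# Matches the encoded form shown in the PUT example above.
```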
The following example shows different forms of date math names and the final names they resolve to, given a current time of 22 March 2024 noon UTC:

| Expression | Resolves to |
|---|---|
| <logstash-{now/d}> | logstash-2024.03.22 |
| <logstash-{now/M}> | logstash-2024.03.01 |
| <logstash-{now/M{yyyy.MM}}> | logstash-2024.03 |
| <logstash-{now/M-1M{yyyy.MM}}> | logstash-2024.02 |
| <logstash-{now/d{yyyy.MM.dd\|+12:00}}> | logstash-2024.03.23 |
To use the characters { and } in the static part of a name template, escape them with a backslash \. For example, <elastic\{ON\}-{now/M}> resolves to elastic{ON}-2024.03.01.

The following example shows a search request that searches the Logstash indices for the past three days, assuming the indices use the default Logstash index name format, logstash-yyyy.MM.dd.
PHP:

```php
$params = [
    'index' => '%3Clogstash-%7Bnow%2Fd-2d%7D%3E%2C%3Clogstash-%7Bnow%2Fd-1d%7D%3E%2C%3Clogstash-%7Bnow%2Fd%7D%3E',
    'body' => [
        'query' => [
            'match' => [
                'test' => 'data',
            ],
        ],
    ],
];
$response = $client->search($params);
```

Python:

```python
resp = client.search(
    index="<logstash-{now/d-2d}>,<logstash-{now/d-1d}>,<logstash-{now/d}>",
    query={"match": {"test": "data"}},
)
print(resp)
```

Ruby:

```ruby
response = client.search(
  index: '<logstash-{now/d-2d}>,<logstash-{now/d-1d}>,<logstash-{now/d}>',
  body: {
    query: {
      match: { test: 'data' }
    }
  }
)
puts response
```

Go:

```go
res, err := es.Search(
    es.Search.WithIndex("%3Clogstash-%7Bnow%2Fd-2d%7D%3E%2C%3Clogstash-%7Bnow%2Fd-1d%7D%3E%2C%3Clogstash-%7Bnow%2Fd%7D%3E"),
    es.Search.WithBody(strings.NewReader(`{
      "query": {
        "match": {
          "test": "data"
        }
      }
    }`)),
    es.Search.WithPretty(),
)
fmt.Println(res, err)
```

JavaScript:

```js
const response = await client.search({
  index: "<logstash-{now/d-2d}>,<logstash-{now/d-1d}>,<logstash-{now/d}>",
  query: {
    match: { test: "data" },
  },
});
console.log(response);
```

Console:

```console
# GET /<logstash-{now/d-2d}>,<logstash-{now/d-1d}>,<logstash-{now/d}>/_search
GET /%3Clogstash-%7Bnow%2Fd-2d%7D%3E%2C%3Clogstash-%7Bnow%2Fd-1d%7D%3E%2C%3Clogstash-%7Bnow%2Fd%7D%3E/_search
{
  "query" : {
    "match": {
      "test": "data"
    }
  }
}
```
Multi-target syntax

Most APIs that accept a <data-stream>, <index>, or <target> request path parameter also support multi-target syntax.

In multi-target syntax, you can use a comma-separated list to run a request on multiple resources, such as data streams, indices, or aliases: test1,test2,test3. You can also use glob-like wildcard (*) expressions to target resources that match a pattern: test*, *test, te*t, or *test*.

You can exclude targets using the - character: test*,-test3.
Aliases are resolved after wildcard expressions. This can result in a request that targets an excluded alias. For example, if test3 is an index alias, the pattern test*,-test3 still targets the indices for test3. To avoid this, exclude the concrete indices for the alias instead.

You can also exclude clusters from a list of clusters to search using the - character: remote*:*,-remote1:*,-remote4:* searches all clusters with an alias that starts with "remote" except for "remote1" and "remote4". Note that to exclude a cluster with this notation you must exclude all of its indices. Excluding a subset of indices on a remote cluster is currently not supported. For example, this throws an exception: remote*:*,-remote1:logs*.
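As a rough illustration of these resolution rules, the following Python sketch expands a multi-target expression against a known list of index names. The function and its use of fnmatch are illustrative only; this is not Elasticsearch's actual implementation, and alias resolution is omitted.

```python
import fnmatch

def resolve_targets(expression: str, indices: list[str]) -> set[str]:
    """Sketch of multi-target resolution: comma-separated patterns,
    * wildcards, and - exclusions applied to prior selections."""
    selected: set[str] = set()
    for pattern in expression.split(","):
        if pattern.startswith("-"):
            # Exclusions remove matches from what has been selected so far.
            selected -= set(fnmatch.filter(selected, pattern[1:]))
        else:
            selected |= set(fnmatch.filter(indices, pattern))
    return selected

indices = ["test1", "test2", "test3", "other"]
print(sorted(resolve_targets("test*,-test3", indices)))  # ['test1', 'test2']
```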
Multi-target APIs that can target indices support the following query string parameters:

ignore_unavailable
    (Optional, Boolean) If false, the request returns an error if it targets a missing or closed index. Defaults to false.

allow_no_indices
    (Optional, Boolean) If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

expand_wildcards
    (Optional, string) Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are:

    all
        Match any data stream or index, including hidden ones.
    open
        Match open, non-hidden indices. Also matches any non-hidden data stream.
    closed
        Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    hidden
        Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    none
        Wildcard patterns are not accepted.

The default settings for the above parameters depend on the API being used.
Some multi-target APIs that can target indices also support the following query string parameter:

ignore_throttled
    (Optional, Boolean) If true, concrete, expanded, or aliased indices are ignored when frozen. Defaults to true. [7.16.0] Deprecated in 7.16.0.

APIs with a single target, such as the get document API, do not support multi-target syntax.
Hidden data streams and indices

For most APIs, wildcard expressions do not match hidden data streams and indices by default. To match hidden data streams and indices using a wildcard expression, you must specify the expand_wildcards query parameter.

Alternatively, querying an index pattern starting with a dot, such as .watcher_hist*, will match hidden indices by default. This is intended to mirror Unix file-globbing behavior and provide a smoother transition path to hidden indices.

You can create hidden data streams by setting data_stream.hidden to true in the stream's matching index template. You can hide indices using the index.hidden index setting.

The backing indices for data streams are hidden automatically. Some features, such as machine learning, store information in hidden indices.

Global index templates that match all indices are not applied to hidden indices.
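As a sketch of the two settings above, the following console-style requests create a hidden index and an index template for a hidden data stream. The index and template names are illustrative.

```
PUT /my-hidden-index
{
  "settings": {
    "index.hidden": true
  }
}

PUT /_index_template/my-hidden-stream-template
{
  "index_patterns": ["my-hidden-stream*"],
  "data_stream": {
    "hidden": true
  }
}
```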
System indices

Elasticsearch modules and plugins can store configuration and state information in internal system indices. You should not directly access or modify system indices as they contain data essential to the operation of the system.

Direct access to system indices is deprecated and will no longer be allowed in a future major version.
Parameters

REST parameters (which map to HTTP URL parameters when using HTTP) follow the convention of using underscore casing.
Request body in query string

For libraries that don't accept a request body for non-POST requests, you can pass the request body as the source query string parameter instead. When using this method, the source_content_type parameter should also be passed with a media type value that indicates the format of the source, such as application/json.
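A minimal sketch of this method, using only the Python standard library to build the URL. The endpoint, index name, and body are illustrative.

```python
import json
import urllib.parse

# Carry the search body in the `source` query string parameter instead of
# a request body, and declare its format via `source_content_type`.
body = {"query": {"match": {"test": "data"}}}
params = urllib.parse.urlencode({
    "source": json.dumps(body),
    "source_content_type": "application/json",
})
url = "http://localhost:9200/my-index/_search?" + params
print(url)
```

A GET request to this URL behaves as if the JSON body had been sent directly.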
REST API version compatibility

Major version upgrades often include a number of breaking changes that impact how you interact with Elasticsearch. While we recommend that you monitor the deprecation logs and update applications before upgrading Elasticsearch, having to coordinate the necessary changes can be an impediment to upgrading.

You can enable an existing application to function without modification after an upgrade by including API compatibility headers, which tell Elasticsearch you are still using the previous version of the REST API. Using these headers allows the structure of requests and responses to remain the same; it does not guarantee the same behavior.

You set version compatibility on a per-request basis in the Content-Type and Accept headers. Setting compatible-with to the same major version as the version you're running has no impact, but ensures that the request will still work after Elasticsearch is upgraded.

To tell Elasticsearch 8.0 you are using the 7.x request and response format, set compatible-with=7:

Content-Type: application/vnd.elasticsearch+json; compatible-with=7
Accept: application/vnd.elasticsearch+json; compatible-with=7
HTTP 429 Too Many Requests status code push back

Elasticsearch APIs may respond with the HTTP 429 Too Many Requests status code, indicating that the cluster is too busy to handle the request. When this happens, consider retrying after a short delay. If the retry also receives a 429 Too Many Requests response, extend the delay by backing off exponentially before each subsequent retry.
URL-based access control

Many users use a proxy with URL-based access control to secure access to Elasticsearch data streams and indices. For multi-search, multi-get, and bulk requests, the user has the choice of specifying a data stream or index in the URL and on each individual request within the request body. This can make URL-based access control challenging.

To prevent the user from overriding the data stream or index specified in the URL, set rest.action.multi.allow_explicit_index to false in elasticsearch.yml. This causes Elasticsearch to reject requests that explicitly specify a data stream or index in the request body.
Boolean Values

All REST API parameters (both request parameters and JSON body) support providing boolean "false" as the value false and boolean "true" as the value true. All other values will raise an error.
Number Values

When passing a numeric parameter in a request body, you may use a string containing the number instead of the native numeric type. For example:

resp = client.search(
    size="1000",
)
print(resp)

response = client.search(
  body: {
    size: '1000'
  }
)
puts response

const response = await client.search({
  size: 1000,
});
console.log(response);

POST /_search
{
  "size": "1000"
}
Integer-valued fields in a response body are described as integer (or occasionally long) in this manual, but there are generally no explicit bounds on such values. JSON, SMILE, CBOR and YAML all permit arbitrarily large integer values. Do not assume that integer fields in a response body will always fit into a 32-bit signed integer.
Byte size units

Whenever the byte size of data needs to be specified, e.g. when setting a buffer size parameter, the value must specify the unit, like 10kb for 10 kilobytes. Note that these units use powers of 1024, so 1kb means 1024 bytes. The supported units are:

b | Bytes
kb | Kilobytes
mb | Megabytes
gb | Gigabytes
tb | Terabytes
pb | Petabytes
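The powers-of-1024 convention can be made concrete with a small parser. This is an illustrative helper, not part of any Elasticsearch client; the function name is made up.

```python
# Byte-size units use powers of 1024, so "1kb" is 1024 bytes.
UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3,
         "tb": 1024**4, "pb": 1024**5}

def parse_byte_size(value: str) -> int:
    value = value.strip().lower()
    # Check longer suffixes first so "10kb" is not misread as ending in "b".
    for suffix in sorted(UNITS, key=len, reverse=True):
        if value.endswith(suffix):
            return int(float(value[: -len(suffix)]) * UNITS[suffix])
    raise ValueError(f"byte size {value!r} is missing a unit")

print(parse_byte_size("10kb"))  # 10240
```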
Distance Units

Wherever distances need to be specified, such as the distance parameter in the Geo-distance query, the default unit is meters if none is specified. Distances can be specified in other units, such as "1km" or "2mi" (2 miles).

The full list of units is listed below:

Mile | mi or miles
Yard | yd or yards
Feet | ft or feet
Inch | in or inch
Kilometer | km or kilometers
Meter | m or meters
Centimeter | cm or centimeters
Millimeter | mm or millimeters
Nautical mile | NM, nmi, or nauticalmiles
Time units

Whenever durations need to be specified, e.g. for a timeout parameter, the duration must specify the unit, like 2d for 2 days. The supported units are:

d | Days
h | Hours
m | Minutes
s | Seconds
ms | Milliseconds
micros | Microseconds
nanos | Nanoseconds
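These units can be illustrated with a small converter to seconds. This is an illustrative sketch, not part of any client library.

```python
# Convert Elasticsearch-style duration strings to seconds.
TIME_UNITS = {"d": 86400.0, "h": 3600.0, "m": 60.0, "s": 1.0,
              "ms": 1e-3, "micros": 1e-6, "nanos": 1e-9}

def parse_duration(value: str) -> float:
    value = value.strip()
    # Check longer suffixes first so "500ms" is not misread as seconds.
    for suffix in sorted(TIME_UNITS, key=len, reverse=True):
        if value.endswith(suffix):
            return float(value[: -len(suffix)]) * TIME_UNITS[suffix]
    raise ValueError(f"duration {value!r} is missing a unit")

print(parse_duration("2d"))  # 172800.0
```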
Unit-less quantities

Unit-less quantities are quantities that don't have a unit like "bytes", "Hertz", "meter", or "long tonne".

If one of these quantities is large we'll print it out like 10m for 10,000,000 or 7k for 7,000. We'll still print 87 when we mean 87 though. These are the supported multipliers:

k | Kilo
m | Mega
g | Giga
t | Tera
p | Peta
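A rough sketch of this output convention, assuming the multipliers are powers of 1000 as the 10m and 7k examples above suggest. The function is illustrative, not an Elasticsearch API.

```python
# Format large unit-less quantities with k/m/g/t/p multipliers.
MULTIPLIERS = [(1000**5, "p"), (1000**4, "t"), (1000**3, "g"),
               (1000**2, "m"), (1000, "k")]

def format_quantity(n: int) -> str:
    for factor, suffix in MULTIPLIERS:
        if n >= factor:
            return f"{n // factor}{suffix}"
    return str(n)  # small values are printed as-is

print(format_quantity(10_000_000))  # 10m
print(format_quantity(87))          # 87
```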