Tutorial: Automate rollover with ILM
When you continuously index timestamped documents into Elasticsearch, you typically use a data stream so you can periodically roll over to a new index. This enables you to implement a hot-warm-cold architecture to meet your performance requirements for your newest data, control costs over time, enforce retention policies, and still get the most out of your data.
Data streams are best suited for append-only use cases. If you need to update or delete existing time series data, you can perform update or delete operations directly on the data stream backing index, as in the sketch below.
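For example, a delete issued directly against a backing index might look like the following sketch. The backing index name and document ID here are purely illustrative placeholders:

```console
# Hypothetical example: delete one document by its ID from a specific
# backing index of the data stream (index name and ID are placeholders).
DELETE .ds-timeseries-2099.03.08-000001/_doc/bfspvnIBr7VVZlfp2lqX
```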
If you frequently send multiple documents using the same `_id` expecting last-write-wins, you may want to use an index alias with a write index instead. You can still use ILM to manage and roll over the alias's indices. Skip to Manage time series data without data streams.
To automate rollover and management of a data stream with ILM, you:
- Create a lifecycle policy that defines the appropriate phases and actions.
- Create an index template that creates the data stream and applies the ILM policy, along with the index settings and mappings, to its backing indices.
- Verify indices are moving through the lifecycle phases as expected.
For an introduction to rolling indices, see Rollover.
When you enable index lifecycle management for Beats or the Logstash Elasticsearch output plugin, lifecycle policies are set up automatically. You do not need to take any other actions. You can modify the default policies through Kibana Management or the ILM APIs.
Create a lifecycle policy
A lifecycle policy specifies the phases in the index lifecycle and the actions to perform in each phase. A lifecycle can have up to five phases: `hot`, `warm`, `cold`, `frozen`, and `delete`.

For example, you might define a `timeseries_policy` that has two phases:
- A `hot` phase that defines a rollover action to specify that an index rolls over when it reaches either a `max_primary_shard_size` of 50 gigabytes or a `max_age` of 30 days.
- A `delete` phase that sets `min_age` to remove the index 90 days after rollover.
The `min_age` value is relative to the rollover time, not the index creation time.
You can create the policy through Kibana or with the create or update policy API. To create the policy from Kibana, open the menu and go to Stack Management > Index Lifecycle Policies. Click Create policy.
![Create policy page](images/ilm/create-policy.png)
API example

```ruby
response = client.ilm.put_lifecycle(
  policy: 'timeseries_policy',
  body: {
    policy: {
      phases: {
        hot: {
          actions: {
            rollover: {
              max_primary_shard_size: '50GB',
              max_age: '30d'
            }
          }
        },
        delete: {
          min_age: '90d',
          actions: {
            delete: {}
          }
        }
      }
    }
  }
)
puts response
```
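For comparison, the same policy expressed as a console request against the create or update lifecycle policy endpoint:

```console
PUT _ilm/policy/timeseries_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50GB",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```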
Create an index template to create the data stream and apply the lifecycle policy
To set up a data stream, first create an index template to specify the lifecycle policy. Because the template is for a data stream, it must also include a `data_stream` definition.

For example, you might create a `timeseries_template` to use for a future data stream named `timeseries`.
To enable ILM to manage the data stream, the template configures one ILM setting:

- `index.lifecycle.name` specifies the name of the lifecycle policy to apply to the data stream.
You can use the Kibana Create template wizard to add the template. From Kibana, open the menu and go to Stack Management > Index Management. In the Index Templates tab, click Create template.
![Create template page](images/data-streams/create-index-template.png)
This wizard invokes the create or update index template API to create the index template with the options you specify.
API example

```ruby
response = client.indices.put_index_template(
  name: 'timeseries_template',
  body: {
    index_patterns: ['timeseries'],
    data_stream: {},
    template: {
      settings: {
        number_of_shards: 1,
        number_of_replicas: 1,
        "index.lifecycle.name": 'timeseries_policy'
      }
    }
  }
)
puts response
```
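For comparison, the same template expressed as a console request:

```console
PUT _index_template/timeseries_template
{
  "index_patterns": ["timeseries"],
  "data_stream": {},
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1,
      "index.lifecycle.name": "timeseries_policy"
    }
  }
}
```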
Create the data stream
To get things started, index a document into the name or wildcard pattern defined in the `index_patterns` of the index template. As long as an existing data stream, index, or index alias does not already use the name, the index request automatically creates a corresponding data stream with a single backing index. Elasticsearch automatically indexes the request's documents into this backing index, which also acts as the stream's write index.

For example, the following request creates the `timeseries` data stream and the first generation backing index called `.ds-timeseries-2099.03.08-000001`.
```console
POST timeseries/_doc
{
  "message": "logged the request",
  "@timestamp": "1591890611"
}
```
When a rollover condition in the lifecycle policy is met, the `rollover` action:

- Creates the second generation backing index, named `.ds-timeseries-2099.03.08-000002`. Because it is a backing index of the `timeseries` data stream, the configuration from the `timeseries_template` index template is applied to the new index.
- As it is the latest generation index of the `timeseries` data stream, the newly created backing index `.ds-timeseries-2099.03.08-000002` becomes the data stream's write index.
This process repeats each time a rollover condition is met. You can search across all of the data stream's backing indices, managed by the `timeseries_policy`, with the `timeseries` data stream name. Write operations are routed to the current write index; read operations are handled by all backing indices.
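For example, a search against the data stream name queries every backing index behind it; the query itself is illustrative:

```console
# Searches all backing indices of the data stream, not just the write index.
GET timeseries/_search
{
  "query": {
    "match": { "message": "request" }
  }
}
```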
Check lifecycle progress
To get status information for managed indices, you use the ILM explain API. This lets you find out things like:
- What phase an index is in and when it entered that phase.
- The current action and what step is being performed.
- If any errors have occurred or progress is blocked.
For example, the following request gets information about the `timeseries` data stream's backing indices:
```ruby
response = client.ilm.explain_lifecycle(
  index: '.ds-timeseries-*'
)
puts response
```

```console
GET .ds-timeseries-*/_ilm/explain
```
The following response shows that the data stream's first generation backing index is waiting for the `hot` phase's `rollover` action. It remains in this state, and ILM continues to call `check-rollover-ready`, until a rollover condition is met.
{ "indices": { ".ds-timeseries-2099.03.07-000001": { "index": ".ds-timeseries-2099.03.07-000001", "index_creation_date_millis": 1538475653281, "time_since_index_creation": "30s", "managed": true, "policy": "timeseries_policy", "lifecycle_date_millis": 1538475653281, "age": "30s", "phase": "hot", "phase_time_millis": 1538475653317, "action": "rollover", "action_time_millis": 1538475653317, "step": "check-rollover-ready", "step_time_millis": 1538475653317, "phase_execution": { "policy": "timeseries_policy", "phase_definition": { "min_age": "0ms", "actions": { "rollover": { "max_primary_shard_size": "50gb", "max_age": "30d" } } }, "version": 1, "modified_date_in_millis": 1539609701576 } } } }
In this response:

- The index's age is used to calculate when to roll over the index via the `max_age` condition.
- `policy` is the policy used to manage the index.
- The `age` used to transition to the next phase is, in this case, the same as the age of the index.
- `step` is the step ILM is performing on the index.
- `phase_execution` is the definition of the current phase (the `hot` phase).
Manage time series data without data streams
Even though data streams are a convenient way to scale and manage time series data, they are designed to be append-only. We recognise there might be use cases where data needs to be updated or deleted in place; because data streams don't support delete and update requests directly, the index APIs would need to be used on the data stream's backing indices. In these cases, we still recommend using a data stream.

If you frequently send multiple documents using the same `_id` expecting last-write-wins, you can use an index alias instead of a data stream to manage indices containing the time series data and periodically roll over to a new index.
To automate rollover and management of time series indices with ILM using an index alias, you:
- Create a lifecycle policy that defines the appropriate phases and actions. See Create a lifecycle policy above.
- Create an index template to apply the policy to each new index.
- Bootstrap an index as the initial write index.
- Verify indices are moving through the lifecycle phases as expected.
Create an index template to apply the lifecycle policy
To automatically apply a lifecycle policy to the new write index on rollover, specify the policy in the index template used to create new indices.

For example, you might create a `timeseries_template` that is applied to new indices whose names match the `timeseries-*` index pattern.
To enable automatic rollover, the template configures two ILM settings:

- `index.lifecycle.name` specifies the name of the lifecycle policy to apply to new indices that match the index pattern.
- `index.lifecycle.rollover_alias` specifies the index alias to be rolled over when the rollover action is triggered for an index.
You can use the Kibana Create template wizard to add the template. To access the wizard, open the menu and go to Stack Management > Index Management. In the Index Templates tab, click Create template.
The create template request for the example template looks like this:
```console
PUT _index_template/timeseries_template
{
  "index_patterns": ["timeseries-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1,
      "index.lifecycle.name": "timeseries_policy",
      "index.lifecycle.rollover_alias": "timeseries"
    }
  }
}
```
In this template:

- `index_patterns` applies the template to a new index if its name starts with `timeseries-`.
- `index.lifecycle.name` is the name of the lifecycle policy to apply to each new index.
- `index.lifecycle.rollover_alias` is the name of the alias used to reference these indices. It is required for policies that use the rollover action.
Bootstrap the initial time series index with a write index alias
To get things started, you need to bootstrap an initial index and designate it as the write index for the rollover alias specified in your index template. The name of this index must match the template's index pattern and end with a number. On rollover, this value is incremented to generate a name for the new index.

For example, the following request creates an index called `timeseries-000001` and makes it the write index for the `timeseries` alias.
```console
PUT timeseries-000001
{
  "aliases": {
    "timeseries": {
      "is_write_index": true
    }
  }
}
```
When the rollover conditions are met, the `rollover` action:

- Creates a new index called `timeseries-000002`. This matches the `timeseries-*` pattern, so the settings from `timeseries_template` are applied to the new index.
- Designates the new index as the write index and makes the bootstrap index read-only.
This process repeats each time rollover conditions are met.
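If you want to see whether the write index currently satisfies a set of rollover conditions without waiting for ILM, the rollover API accepts a dry run; the conditions below are illustrative:

```console
# Dry run: reports which conditions are met without performing the rollover.
POST timeseries/_rollover?dry_run
{
  "conditions": {
    "max_age": "30d",
    "max_primary_shard_size": "50gb"
  }
}
```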
You can search across all of the indices managed by the `timeseries_policy` with the `timeseries` alias. Write operations are routed to the current write index.
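One way to confirm which index the alias currently designates as the write index is to inspect the alias itself:

```console
# Lists every index behind the alias; the current write index reports
# "is_write_index": true.
GET _alias/timeseries
```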
Check lifecycle progress
Retrieving the status information for managed indices is very similar to the data stream case. See the data stream check progress section for more information. The only difference is the indices namespace, so retrieving the progress entails the following API call:
```ruby
response = client.ilm.explain_lifecycle(
  index: 'timeseries-*'
)
puts response
```

```console
GET timeseries-*/_ilm/explain
```