Change mappings and settings for a data stream
Each data stream has a matching index template. Mappings and index settings from this template are applied to new backing indices created for the stream. This includes the stream’s first backing index, which is auto-generated when the stream is created.
Before creating a data stream, we recommend you carefully consider which mappings and settings to include in this template.
If you later need to change the mappings or settings for a data stream, you have a few options:
- Add a new field mapping to a data stream
- Change an existing field mapping in a data stream
- Change a dynamic index setting for a data stream
- Change a static index setting for a data stream
If your changes include modifications to existing field mappings or static index settings, a reindex is often required to apply the changes to a data stream’s backing indices. If you are already performing a reindex, you can use the same process to add new field mappings and change dynamic index settings. See Use reindex to change mappings or settings.
Add a new field mapping to a data stream
To add a mapping for a new field to a data stream, follow these steps:
- Update the index template used by the data stream. This ensures the new field mapping is added to future backing indices created for the stream.
  For example, my-data-stream-template is an existing index template used by my-data-stream.
  The following put index template request adds a mapping for a new field, message, to the template.
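  Assuming my-data-stream-template is the minimal template shown later under "Change a static index setting for a data stream", the request might look like this; keep any other mappings and settings your template already defines.

  PUT /_index_template/my-data-stream-template
  {
    "index_patterns": [ "my-data-stream*" ],
    "data_stream": { },
    "priority": 200,
    "template": {
      "mappings": {
        "properties": {
          "message": {
            "type": "text"
          }
        }
      }
    }
  }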
- Use the put mapping API to add the new field mapping to the data stream. By default, this adds the mapping to the stream’s existing backing indices, including the write index.
  The following put mapping API request adds the new message field mapping to my-data-stream.

  PUT /my-data-stream/_mapping
  {
    "properties": {
      "message": {
        "type": "text"
      }
    }
  }

  To add the mapping only to the stream’s write index, set the put mapping API’s write_index_only query parameter to true.
  The following put mapping request adds the new message field mapping only to my-data-stream’s write index. The new field mapping is not added to the stream’s other backing indices.

  PUT /my-data-stream/_mapping?write_index_only=true
  {
    "properties": {
      "message": {
        "type": "text"
      }
    }
  }
Change an existing field mapping in a data stream
The documentation for each mapping parameter indicates whether you can update it for an existing field using the put mapping API. To update these parameters for an existing field, follow these steps:
- Update the index template used by the data stream. This ensures the updated field mapping is added to future backing indices created for the stream.
  For example, my-data-stream-template is an existing index template used by my-data-stream.
  The following put index template request changes the argument for the host.ip field’s ignore_malformed mapping parameter to true.
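  Again assuming the minimal my-data-stream-template used elsewhere in this page, the request might look like this; keep any other mappings and settings your template already defines.

  PUT /_index_template/my-data-stream-template
  {
    "index_patterns": [ "my-data-stream*" ],
    "data_stream": { },
    "priority": 200,
    "template": {
      "mappings": {
        "properties": {
          "host": {
            "properties": {
              "ip": {
                "type": "ip",
                "ignore_malformed": true
              }
            }
          }
        }
      }
    }
  }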
- Use the put mapping API to apply the mapping changes to the data stream. By default, this applies the changes to the stream’s existing backing indices, including the write index.
  The following put mapping API request targets my-data-stream. The request changes the argument for the host.ip field’s ignore_malformed mapping parameter to true.

  PUT /my-data-stream/_mapping
  {
    "properties": {
      "host": {
        "properties": {
          "ip": {
            "type": "ip",
            "ignore_malformed": true
          }
        }
      }
    }
  }

  To apply the mapping changes only to the stream’s write index, set the put mapping API’s write_index_only query parameter to true.
  The following put mapping request changes the host.ip field’s mapping only for my-data-stream’s write index. The change is not applied to the stream’s other backing indices.

  PUT /my-data-stream/_mapping?write_index_only=true
  {
    "properties": {
      "host": {
        "properties": {
          "ip": {
            "type": "ip",
            "ignore_malformed": true
          }
        }
      }
    }
  }
Except for supported mapping parameters, we don’t recommend you change the mapping or field data type of existing fields, even in a data stream’s matching index template or its backing indices. Changing the mapping of an existing field could invalidate any data that’s already indexed.
If you need to change the mapping of an existing field, create a new data stream and reindex your data into it. See Use reindex to change mappings or settings.
Change a dynamic index setting for a data stream
To change a dynamic index setting for a data stream, follow these steps:
- Update the index template used by the data stream. This ensures the setting is applied to future backing indices created for the stream.
  For example, my-data-stream-template is an existing index template used by my-data-stream.
  The following put index template request changes the template’s index.refresh_interval index setting to 30s (30 seconds).
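  Assuming the minimal my-data-stream-template used elsewhere in this page, the request might look like this; keep any other mappings and settings your template already defines.

  PUT /_index_template/my-data-stream-template
  {
    "index_patterns": [ "my-data-stream*" ],
    "data_stream": { },
    "priority": 200,
    "template": {
      "settings": {
        "index.refresh_interval": "30s"
      }
    }
  }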
- Use the update index settings API to update the index setting for the data stream. By default, this applies the setting to the stream’s existing backing indices, including the write index.
  The following update index settings API request updates the index.refresh_interval setting for my-data-stream.

  PUT /my-data-stream/_settings
  {
    "index": {
      "refresh_interval": "30s"
    }
  }
Change a static index setting for a data stream
Static index settings can only be set when a backing index is created. You cannot update static index settings using the update index settings API.
To apply a new static setting to future backing indices, update the index template used by the data stream. The setting is automatically applied to any backing index created after the update.
For example, my-data-stream-template is an existing index template used by my-data-stream.
The following put index template API request adds new sort.field and sort.order index settings to the template.

PUT /_index_template/my-data-stream-template
{
  "index_patterns": [ "my-data-stream*" ],
  "data_stream": { },
  "priority": 200,
  "template": {
    "settings": {
      "sort.field": [ "@timestamp" ],
      "sort.order": [ "desc" ]
    }
  }
}
If you want, you can roll over the data stream to immediately apply the setting to the data stream’s write index. This affects any new data added to the stream after the rollover. However, it does not affect the data stream’s existing backing indices or existing data.
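For example, the following rollover API request rolls over my-data-stream. With no conditions in the request body, the rollover happens immediately.

POST /my-data-stream/_rollover/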
To apply static setting changes to existing backing indices, you must create a new data stream and reindex your data into it. See Use reindex to change mappings or settings.
Use reindex to change mappings or settings
You can use a reindex to change the mappings or settings of a data stream. This is often required to change the data type of an existing field or update static index settings for backing indices.
To reindex a data stream, first create or update an index template so that it contains the wanted mapping or setting changes. You can then reindex the existing data stream into a new stream matching the template. This applies the mapping and setting changes in the template to each document and backing index added to the new data stream. These changes also affect any future backing index created by the new stream.
Follow these steps:
- Choose a name or index pattern for a new data stream. This new data stream will contain data from your existing stream.
  You can use the resolve index API to check if the name or pattern matches any existing indices, index aliases, or data streams. If so, you should consider using another name or pattern.
  The following resolve index API request checks for any existing indices, index aliases, or data streams that start with new-data-stream. If not, the new-data-stream* index pattern can be used to create a new data stream.

  GET /_resolve/index/new-data-stream*

  The API returns the following response, indicating no existing targets match this pattern.

  {
    "indices": [ ],
    "aliases": [ ],
    "data_streams": [ ]
  }
- Create or update an index template. This template should contain the mappings and settings you’d like to apply to the new data stream’s backing indices.
  This index template must meet the requirements for a data stream template. It should also contain your previously chosen name or index pattern in the index_patterns property.
  If you are only adding or changing a few things, we recommend you create a new template by copying an existing one and modifying it as needed.
  For example, my-data-stream-template is an existing index template used by my-data-stream.
  The following put index template API request creates a new index template, new-data-stream-template. new-data-stream-template uses my-data-stream-template as its basis, with the following changes:
  - The index pattern in index_patterns matches any index or data stream starting with new-data-stream.
  - The @timestamp field mapping uses the date_nanos field data type rather than the date data type.
  - The template includes sort.field and sort.order index settings, which were not in the original my-data-stream-template template.
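  Assuming my-data-stream-template is the minimal template shown elsewhere in this page, the new template might look like this.

  PUT /_index_template/new-data-stream-template
  {
    "index_patterns": [ "new-data-stream*" ],
    "data_stream": { },
    "priority": 200,
    "template": {
      "mappings": {
        "properties": {
          "@timestamp": {
            "type": "date_nanos"
          }
        }
      },
      "settings": {
        "sort.field": [ "@timestamp" ],
        "sort.order": [ "desc" ]
      }
    }
  }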
- Use the create data stream API to manually create the new data stream. The name of the data stream must match the index pattern defined in the new template’s index_patterns property.
  We do not recommend indexing new data to create this data stream. Later, you will reindex older data from an existing data stream into this new stream. This could result in one or more backing indices that contain a mix of new and old data.
  Mixing new and old data in a data stream
  While mixing new and old data is safe, it could interfere with data retention. If you delete older indices, you could accidentally delete a backing index that contains both new and old data. To prevent premature data loss, you would need to retain such a backing index until you are ready to delete its newest data.
  The following create data stream API request targets new-data-stream, which matches the index pattern for new-data-stream-template. Because no existing index or data stream uses this name, this request creates the new-data-stream data stream.

  PUT /_data_stream/new-data-stream
- If you do not want to mix new and old data in your new data stream, pause the indexing of new documents. While mixing old and new data is safe, it could interfere with data retention. See Mixing new and old data in a data stream.
- If you use ILM to automate rollover, reduce the ILM poll interval. This ensures the current write index doesn’t grow too large while waiting for the rollover check. By default, ILM checks rollover conditions every 10 minutes.
  The following update cluster settings API request lowers the indices.lifecycle.poll_interval setting to 1m (one minute).

  PUT /_cluster/settings
  {
    "transient": {
      "indices.lifecycle.poll_interval": "1m"
    }
  }
- Reindex your data to the new data stream using an op_type of create.
  If you want to partition the data in the order in which it was originally indexed, you can run separate reindex requests. These reindex requests can use individual backing indices as the source. You can use the get data stream API to retrieve a list of backing indices.
  For example, you plan to reindex data from my-data-stream into new-data-stream. However, you want to submit a separate reindex request for each backing index in my-data-stream, starting with the oldest backing index. This preserves the order in which the data was originally indexed.
  The following get data stream API request retrieves information about my-data-stream, including a list of its backing indices.

  GET /_data_stream/my-data-stream

  The API returns the following response. Note the indices property contains an array of the stream’s current backing indices. The first item in the array contains information about the stream’s oldest backing index, .ds-my-data-stream-000001.

  {
    "data_streams": [
      {
        "name": "my-data-stream",
        "timestamp_field": {
          "name": "@timestamp"
        },
        "indices": [
          {
            "index_name": ".ds-my-data-stream-000001",
            "index_uuid": "Gpdiyq8sRuK9WuthvAdFbw"
          },
          {
            "index_name": ".ds-my-data-stream-000002",
            "index_uuid": "_eEfRrFHS9OyhqWntkgHAQ"
          }
        ],
        "generation": 2,
        "status": "GREEN",
        "template": "my-data-stream-template"
      }
    ]
  }

  The first item in the indices array for my-data-stream contains information about the stream’s oldest backing index, .ds-my-data-stream-000001.

  The following reindex API request copies documents from .ds-my-data-stream-000001 to new-data-stream. Note the request’s op_type is create.

  POST /_reindex
  {
    "source": {
      "index": ".ds-my-data-stream-000001"
    },
    "dest": {
      "index": "new-data-stream",
      "op_type": "create"
    }
  }

  You can also use a query to reindex only a subset of documents with each request.
  The following reindex API request copies documents from my-data-stream to new-data-stream. The request uses a range query to only reindex documents with a timestamp within the last week. Note the request’s op_type is create.

  POST /_reindex
  {
    "source": {
      "index": "my-data-stream",
      "query": {
        "range": {
          "@timestamp": {
            "gte": "now-7d/d",
            "lte": "now/d"
          }
        }
      }
    },
    "dest": {
      "index": "new-data-stream",
      "op_type": "create"
    }
  }
- If you previously changed your ILM poll interval, change it back to its original value when reindexing is complete. This prevents unnecessary load on the master node.
  The following update cluster settings API request resets the indices.lifecycle.poll_interval setting to its default value, 10 minutes.

  PUT /_cluster/settings
  {
    "transient": {
      "indices.lifecycle.poll_interval": null
    }
  }
- Resume indexing using the new data stream. Searches on this stream will now query your new data and the reindexed data.
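  For example, a minimal indexing request against the new stream might look like the following. The field values here are illustrative; documents written to a data stream must include an @timestamp field.

  POST /new-data-stream/_doc/
  {
    "@timestamp": "2099-03-08T11:06:07.000Z",
    "message": "example log message"
  }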
- Once you have verified that all reindexed data is available in the new data stream, you can safely remove the old stream.
  The following delete data stream API request deletes my-data-stream. This request also deletes the stream’s backing indices and any data they contain.

  DELETE /_data_stream/my-data-stream