_source field

The _source field contains the original JSON document body that was passed at index time. The _source field itself is not indexed (and thus is not searchable), but it is stored so that it can be returned when executing fetch requests, like get or search.
If disk usage is important to you, then consider the following options:

- Using synthetic _source, which reconstructs source content at the time of retrieval instead of storing it on disk. This shrinks disk usage, at the cost of slower access to _source in Get and Search queries.
- Disabling the _source field completely. This shrinks disk usage but disables features that rely on _source.
Synthetic _source

Synthetic _source is Generally Available only for TSDB indices (indices that have index.mode set to time_series). For other indices, synthetic _source is in technical preview. Features in technical preview may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
Though very handy to have around, the source field takes up a significant amount of space on disk. Instead of storing source documents on disk exactly as you send them, Elasticsearch can reconstruct source content on the fly upon retrieval. Enable this by setting the index setting index.mapping.source.mode to synthetic:
resp = client.indices.create(
    index="idx",
    settings={
        "index": {
            "mapping": {
                "source": {
                    "mode": "synthetic"
                }
            }
        }
    },
)
print(resp)

const response = await client.indices.create({
  index: "idx",
  settings: {
    index: {
      mapping: {
        source: {
          mode: "synthetic",
        },
      },
    },
  },
});
console.log(response);

PUT idx
{
  "settings": {
    "index": {
      "mapping": {
        "source": {
          "mode": "synthetic"
        }
      }
    }
  }
}
While this on-the-fly reconstruction is generally slower than saving the source documents verbatim and loading them at query time, it saves a lot of storage space. Additional latency can be avoided by not loading the _source field in queries when it is not needed.
Supported fields

Synthetic _source is supported by all field types. Depending on implementation details, field types have different properties when used with synthetic _source.
Most field types construct synthetic _source using existing data, most commonly doc_values and stored fields. For these field types, no additional space is needed to store the contents of the _source field. Due to the storage layout of doc_values, the generated _source field undergoes modifications compared to the original document.

For all other field types, the original value of the field is stored as-is, in the same way as the _source field in non-synthetic mode. In this case there are no modifications, and field data in _source is the same as in the original document. Similarly, malformed values of fields that use ignore_malformed or ignore_above need to be stored as-is. This approach is less storage efficient, since data needed for _source reconstruction is stored in addition to other data required to index the field (like doc_values).
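As a hypothetical illustration of this trade-off, the following sketch maps an integer field with ignore_malformed in a synthetic-source index (idx_malformed and count are assumed names, not from the original examples):

# Hypothetical sketch: with ignore_malformed enabled, any malformed values sent
# to "count" must be kept verbatim (extra storage) so they can be reproduced
# in the synthesized _source.
resp = client.indices.create(
    index="idx_malformed",
    settings={"index": {"mapping": {"source": {"mode": "synthetic"}}}},
    mappings={
        "properties": {
            "count": {"type": "integer", "ignore_malformed": True}
        }
    },
)
print(resp)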
Synthetic _source restrictions

Some field types have additional restrictions. These restrictions are documented in the synthetic _source section of the field type's documentation.
Synthetic _source modifications

When synthetic _source is enabled, retrieved documents undergo some modifications compared to the original JSON.
Arrays moved to leaf fields

Synthetic _source arrays are moved to leaves. For example:
resp = client.index(
    index="idx",
    id="1",
    document={
        "foo": [
            {"bar": 1},
            {"bar": 2}
        ]
    },
)
print(resp)

response = client.index(
  index: 'idx',
  id: 1,
  body: {
    foo: [
      { bar: 1 },
      { bar: 2 }
    ]
  }
)
puts response

const response = await client.index({
  index: "idx",
  id: 1,
  document: {
    foo: [{ bar: 1 }, { bar: 2 }],
  },
});
console.log(response);

PUT idx/_doc/1
{
  "foo": [
    { "bar": 1 },
    { "bar": 2 }
  ]
}
Will become:
{ "foo": { "bar": [1, 2] } }
This can cause some arrays to vanish:
resp = client.index(
    index="idx",
    id="1",
    document={
        "foo": [
            {"bar": 1},
            {"baz": 2}
        ]
    },
)
print(resp)

response = client.index(
  index: 'idx',
  id: 1,
  body: {
    foo: [
      { bar: 1 },
      { baz: 2 }
    ]
  }
)
puts response

const response = await client.index({
  index: "idx",
  id: 1,
  document: {
    foo: [{ bar: 1 }, { baz: 2 }],
  },
});
console.log(response);

PUT idx/_doc/1
{
  "foo": [
    { "bar": 1 },
    { "baz": 2 }
  ]
}
Will become:
{ "foo": { "bar": 1, "baz": 2 } }
Fields named as they are mapped

Synthetic source names fields as they are named in the mapping. When used with dynamic mapping, fields with dots (.) in their names are, by default, interpreted as multiple objects, while dots in field names are preserved within objects that have subobjects disabled. For example:
resp = client.index(
    index="idx",
    id="1",
    document={
        "foo.bar.baz": 1
    },
)
print(resp)

const response = await client.index({
  index: "idx",
  id: 1,
  document: {
    "foo.bar.baz": 1,
  },
});
console.log(response);

PUT idx/_doc/1
{
  "foo.bar.baz": 1
}
Will become:
{ "foo": { "bar": { "baz": 1 } } }
This impacts how source contents can be referenced in scripts. For instance, referencing a field by its original source path in a script will return null:
"script": { "source": """ emit(params._source['foo.bar.baz']) """ }
Instead, source references need to be in line with the mapping structure:
"script": { "source": """ emit(params._source['foo']['bar']['baz']) """ }
or simply
"script": { "source": """ emit(params._source.foo.bar.baz) """ }
The following field APIs are preferable because, in addition to being agnostic to the mapping structure, they make use of doc values if available and fall back to synthetic source only when needed. This reduces source synthesis, a slow and costly operation.
"script": { "source": """ emit(field('foo.bar.baz').get(null)) """ } "script": { "source": """ emit($('foo.bar.baz', null)) """ }
Alphabetical sorting

Synthetic _source fields are sorted alphabetically. The JSON RFC defines objects as "an unordered collection of zero or more name/value pairs", so applications shouldn't care, but without synthetic _source the original ordering is preserved and some applications may, counter to the spec, do something with that ordering.
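As an illustration, assuming the synthetic-source index idx from above (the field names here are arbitrary), a document indexed with its fields in one order comes back with them sorted alphabetically:

# Illustration: fields are returned in alphabetical order, not insertion order.
resp = client.index(
    index="idx",
    id="2",
    document={"zebra": 1, "apple": 2},   # sent with "zebra" before "apple"
)
print(resp)

resp = client.get(index="idx", id="2")
print(resp["_source"])   # expected: {"apple": 2, "zebra": 1}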
Representation of ranges

Range field values (e.g. long_range) are always represented as inclusive on both sides, with bounds adjusted accordingly. See examples.
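As a hedged illustration (idx_range and my_range are assumed names for an index with a mapped long_range field and synthetic source enabled), an exclusive bound comes back as its inclusive equivalent:

# Sketch: index a range with an exclusive upper bound.
resp = client.index(
    index="idx_range",
    id="1",
    document={"my_range": {"gte": 1, "lt": 10}},
)
print(resp)
# Synthetic _source represents the range with inclusive bounds, e.g.
# {"my_range": {"gte": 1, "lte": 9}}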
Reduced precision of geo_point values

Values of geo_point fields are represented in synthetic _source with reduced precision. See examples.
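As a hedged illustration (idx_geo and point are assumed names for an index with a mapped geo_point field and synthetic source enabled):

# Sketch: the retrieved coordinates are reconstructed from doc_values and may
# differ from the indexed values in the least significant decimal places.
resp = client.index(
    index="idx_geo",
    id="1",
    document={"point": {"lat": 52.374081, "lon": 4.91235}},
)
print(resp)
# A GET of this document returns lat/lon values close to, but not necessarily
# identical to, 52.374081 and 4.91235.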
Minimizing source modifications

It is possible to avoid synthetic source modifications for a particular object or field, at extra storage cost. This is controlled through the mapping parameter synthetic_source_keep, with the following options:

- none: synthetic source diverges from the original source as described above (default).
- arrays: arrays of the corresponding field or object preserve the original element ordering and duplicate elements. The synthetic source fragment for such arrays is not guaranteed to match the original source exactly, e.g. the array [1, 2, [5], [[4, [3]]], 5] may appear as-is or in an equivalent format like [1, 2, 5, 4, 3, 5]. The exact format may change in the future, in an effort to reduce the storage overhead of this option.
- all: the source for both singleton instances and arrays of the corresponding field or object gets recorded. When applied to objects, the source of all sub-objects and sub-fields gets captured. Furthermore, the original source of arrays gets captured and appears in synthetic source with no modifications.
For instance:
resp = client.indices.create(
    index="idx_keep",
    settings={
        "index": {
            "mapping": {
                "source": {
                    "mode": "synthetic"
                }
            }
        }
    },
    mappings={
        "properties": {
            "path": {
                "type": "object",
                "synthetic_source_keep": "all"
            },
            "ids": {
                "type": "integer",
                "synthetic_source_keep": "arrays"
            }
        }
    },
)
print(resp)

const response = await client.indices.create({
  index: "idx_keep",
  settings: {
    index: {
      mapping: {
        source: {
          mode: "synthetic",
        },
      },
    },
  },
  mappings: {
    properties: {
      path: {
        type: "object",
        synthetic_source_keep: "all",
      },
      ids: {
        type: "integer",
        synthetic_source_keep: "arrays",
      },
    },
  },
});
console.log(response);

PUT idx_keep
{
  "settings": {
    "index": {
      "mapping": {
        "source": {
          "mode": "synthetic"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "path": {
        "type": "object",
        "synthetic_source_keep": "all"
      },
      "ids": {
        "type": "integer",
        "synthetic_source_keep": "arrays"
      }
    }
  }
}
resp = client.index(
    index="idx_keep",
    id="1",
    document={
        "path": {
            "to": [
                {"foo": [3, 2, 1]},
                {"foo": [30, 20, 10]}
            ],
            "bar": "baz"
        },
        "ids": [200, 100, 300, 100]
    },
)
print(resp)

const response = await client.index({
  index: "idx_keep",
  id: 1,
  document: {
    path: {
      to: [{ foo: [3, 2, 1] }, { foo: [30, 20, 10] }],
      bar: "baz",
    },
    ids: [200, 100, 300, 100],
  },
});
console.log(response);

PUT idx_keep/_doc/1
{
  "path": {
    "to": [
      { "foo": [3, 2, 1] },
      { "foo": [30, 20, 10] }
    ],
    "bar": "baz"
  },
  "ids": [ 200, 100, 300, 100 ]
}
Retrieving this document returns the original source, with no array deduplication or sorting:

{
  "path": {
    "to": [
      { "foo": [3, 2, 1] },
      { "foo": [30, 20, 10] }
    ],
    "bar": "baz"
  },
  "ids": [ 200, 100, 300, 100 ]
}
The option for capturing the source of arrays can also be applied at index level, by setting index.mapping.synthetic_source_keep to arrays, as shown in the sketch below. This applies to all objects and fields in the index, except for the ones with an explicit override of synthetic_source_keep set to none. In this case, the storage overhead naturally grows with the number and size of the arrays present in the source of each document.
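A minimal sketch of that index-level setting (idx_keep_arrays is an assumed index name, not from the original examples):

# Sketch: preserve array source index-wide instead of per field.
resp = client.indices.create(
    index="idx_keep_arrays",
    settings={
        "index": {
            "mapping": {
                "source": {"mode": "synthetic"},
                "synthetic_source_keep": "arrays",
            }
        }
    },
)
print(resp)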
Field types that support synthetic source with no storage overhead

The following field types support synthetic source using data from doc_values or stored fields, and require no additional storage space to construct the _source field.

If you enable the ignore_malformed or ignore_above settings, then additional storage is required to store ignored field values for these types.
Disabling the _source field

Though very handy to have around, the source field does incur storage overhead within the index. For this reason, it can be disabled as follows:
resp = client.indices.create(
    index="my-index-000001",
    mappings={
        "_source": {
            "enabled": False
        }
    },
)
print(resp)

response = client.indices.create(
  index: 'my-index-000001',
  body: {
    mappings: {
      _source: {
        enabled: false
      }
    }
  }
)
puts response

const response = await client.indices.create({
  index: "my-index-000001",
  mappings: {
    _source: {
      enabled: false,
    },
  },
});
console.log(response);

PUT my-index-000001
{
  "mappings": {
    "_source": {
      "enabled": false
    }
  }
}
Think before disabling the _source field

Users often disable the _source field without thinking about the consequences, and then live to regret it. If the _source field isn't available, then a number of features are not supported:

- The update, update_by_query, and reindex APIs.
- In the Kibana Discover application, field data will not be displayed.
- On-the-fly highlighting.
- The ability to reindex from one Elasticsearch index to another, either to change mappings or analysis, or to upgrade an index to a new major version.
- The ability to debug queries or aggregations by viewing the original document used at index time.
- Potentially in the future, the ability to repair index corruption automatically.
If disk space is a concern, increase the compression level rather than disabling the _source field.
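For instance, a hedged sketch of that alternative, using the best_compression codec on a new index (my-index-000002 is an assumed name; index.codec is a static index setting, so it must be set at index creation time or on a closed index):

# Sketch: keep _source enabled and reduce its on-disk footprint with the
# best_compression codec instead of disabling it.
resp = client.indices.create(
    index="my-index-000002",
    settings={"index": {"codec": "best_compression"}},
)
print(resp)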
Including / Excluding fields from _source

An expert-only feature is the ability to prune the contents of the _source field after the document has been indexed, but before the _source field is stored.

Removing fields from the _source has similar downsides to disabling _source, especially the fact that you cannot reindex documents from one Elasticsearch index to another. Consider using source filtering instead.

The includes/excludes parameters (which also accept wildcards) can be used as follows:
resp = client.indices.create(
    index="logs",
    mappings={
        "_source": {
            "includes": ["*.count", "meta.*"],
            "excludes": ["meta.description", "meta.other.*"]
        }
    },
)
print(resp)

resp1 = client.index(
    index="logs",
    id="1",
    document={
        "requests": {"count": 10, "foo": "bar"},
        "meta": {
            "name": "Some metric",
            "description": "Some metric description",
            "other": {"foo": "one", "baz": "two"}
        }
    },
)
print(resp1)

resp2 = client.search(
    index="logs",
    query={
        "match": {"meta.other.foo": "one"}
    },
)
print(resp2)

response = client.indices.create(
  index: 'logs',
  body: {
    mappings: {
      _source: {
        includes: ['*.count', 'meta.*'],
        excludes: ['meta.description', 'meta.other.*']
      }
    }
  }
)
puts response

response = client.index(
  index: 'logs',
  id: 1,
  body: {
    requests: { count: 10, foo: 'bar' },
    meta: {
      name: 'Some metric',
      description: 'Some metric description',
      other: { foo: 'one', baz: 'two' }
    }
  }
)
puts response

response = client.search(
  index: 'logs',
  body: {
    query: {
      match: { 'meta.other.foo' => 'one' }
    }
  }
)
puts response

const response = await client.indices.create({
  index: "logs",
  mappings: {
    _source: {
      includes: ["*.count", "meta.*"],
      excludes: ["meta.description", "meta.other.*"],
    },
  },
});
console.log(response);

const response1 = await client.index({
  index: "logs",
  id: 1,
  document: {
    requests: { count: 10, foo: "bar" },
    meta: {
      name: "Some metric",
      description: "Some metric description",
      other: { foo: "one", baz: "two" },
    },
  },
});
console.log(response1);

const response2 = await client.search({
  index: "logs",
  query: {
    match: { "meta.other.foo": "one" },
  },
});
console.log(response2);

PUT logs
{
  "mappings": {
    "_source": {
      "includes": [ "*.count", "meta.*" ],
      "excludes": [ "meta.description", "meta.other.*" ]
    }
  }
}

PUT logs/_doc/1
{
  "requests": {
    "count": 10,
    "foo": "bar"
  },
  "meta": {
    "name": "Some metric",
    "description": "Some metric description",
    "other": {
      "foo": "one",
      "baz": "two"
    }
  }
}

GET logs/_search
{
  "query": {
    "match": {
      "meta.other.foo": "one"
    }
  }
}