WARNING: Version 2.0 of Elasticsearch has passed its EOL date. This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Token count datatype

A field of type token_count is really an integer field which accepts string values, analyzes them, then indexes the number of tokens in the string.

For instance:
PUT my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "name": {
          "type": "string",
          "fields": {
            "length": {
              "type":     "token_count",
              "analyzer": "standard"
            }
          }
        }
      }
    }
  }
}

PUT my_index/my_type/1
{ "name": "John Smith" }

PUT my_index/my_type/2
{ "name": "Rachel Alice Williams" }

GET my_index/_search
{
  "query": {
    "term": {
      "name.length": 3
    }
  }
}
- The name field is an analyzed string field which uses the default standard analyzer.
- The name.length field is a token_count multi-field which indexes the number of tokens produced for the name field.
- This query matches only the document containing Rachel Alice Williams, as that value is analyzed into three tokens.
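Because name.length is indexed as a regular integer, it can also be used in range queries, sorting, and aggregations. The following request is a minimal sketch built on the documents indexed above, with no names assumed beyond those in the mapping shown; it finds names with two or three tokens and sorts the longer name first:

GET my_index/_search
{
  "query": {
    "range": {
      "name.length": { "gte": 2, "lte": 3 }
    }
  },
  "sort": [
    { "name.length": "desc" }
  ]
}

Both documents match (two and three tokens respectively), with Rachel Alice Williams returned first.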
Technically the token_count type sums position increments rather than counting tokens. This means that even if the analyzer filters out stop words they are included in the count.
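As an illustration of this behaviour, the sketch below uses a hypothetical index my_index_stop and a hypothetical sub-field name.stopped_length that relies on the built-in stop analyzer. Assuming the analyzer drops the as a stop word, the removed token still contributes a position increment, so the quick brown fox would be indexed with a count of 4 rather than 3:

PUT my_index_stop
{
  "mappings": {
    "my_type": {
      "properties": {
        "name": {
          "type": "string",
          "fields": {
            "stopped_length": {
              "type":     "token_count",
              "analyzer": "stop"
            }
          }
        }
      }
    }
  }
}

PUT my_index_stop/my_type/1
{ "name": "the quick brown fox" }

GET my_index_stop/_search
{
  "query": {
    "term": { "name.stopped_length": 4 }
  }
}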
Parameters for token_count fields

The following parameters are accepted by token_count fields:
- analyzer: The analyzer which should be used to analyze the string value. Required. For best performance, use an analyzer without token filters.
- boost: Field-level index time boosting. Accepts a floating point number, defaults to 1.0.
- doc_values: Should the field be stored on disk in a column-stride fashion, so that it can later be used for sorting, aggregations, or scripting? Accepts true (default) or false.
- index: Should the field be searchable? Accepts not_analyzed (default) and no.
- include_in_all: Whether or not the field value should be included in the _all field. Accepts true or false.
- null_value: Accepts a numeric value of the same type as the field, which is substituted for any explicit null values.
- precision_step: Controls the number of extra terms that are indexed to make range queries faster.
- store: Whether the field value should be stored and retrievable separately from the _source field. Accepts true or false (default).
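To show how several of these parameters combine, here is a minimal sketch of a mapping; the index name my_index_2 and the specific values chosen for store, null_value, and precision_step are illustrative assumptions rather than recommendations:

PUT my_index_2
{
  "mappings": {
    "my_type": {
      "properties": {
        "name": {
          "type": "string",
          "fields": {
            "length": {
              "type":           "token_count",
              "analyzer":       "standard",
              "store":          true,
              "null_value":     0,
              "precision_step": 8
            }
          }
        }
      }
    }
  }
}

With this mapping the computed count is stored and retrievable on its own, explicit null values are indexed as 0, and range queries on name.length use the smaller precision step.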