Classic token filter
Performs optional post-processing of terms generated by the classic tokenizer.
This filter removes the English possessive ('s) from the end of words and removes dots from acronyms. It uses Lucene’s ClassicFilter.
Example
The following analyze API request demonstrates how the classic token filter works.
resp = client.indices.analyze(
    tokenizer="classic",
    filter=[
        "classic"
    ],
    text="The 2 Q.U.I.C.K. Brown-Foxes jumped over the lazy dog's bone.",
)
print(resp)
response = client.indices.analyze(
  body: {
    tokenizer: 'classic',
    filter: [
      'classic'
    ],
    text: "The 2 Q.U.I.C.K. Brown-Foxes jumped over the lazy dog's bone."
  }
)
puts response
const response = await client.indices.analyze({
  tokenizer: "classic",
  filter: ["classic"],
  text: "The 2 Q.U.I.C.K. Brown-Foxes jumped over the lazy dog's bone.",
});
console.log(response);
GET /_analyze
{
  "tokenizer" : "classic",
  "filter" : ["classic"],
  "text" : "The 2 Q.U.I.C.K. Brown-Foxes jumped over the lazy dog's bone."
}
The filter produces the following tokens:
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog, bone ]
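If you want to work with the token text programmatically, a minimal sketch using the Python client response from the example above (assuming the response is stored in `resp`, as shown) could look like this:

# Sketch: pull just the token strings out of the analyze API response.
# The response's "tokens" array holds one entry per emitted token,
# each with its text under the "token" key.
tokens = [t["token"] for t in resp["tokens"]]
print(tokens)
# ['The', '2', 'QUICK', 'Brown', 'Foxes', 'jumped', 'over', 'the', 'lazy', 'dog', 'bone']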
Add to an analyzer
The following create index API request uses the classic token filter to configure a new custom analyzer.
resp = client.indices.create(
    index="classic_example",
    settings={
        "analysis": {
            "analyzer": {
                "classic_analyzer": {
                    "tokenizer": "classic",
                    "filter": [
                        "classic"
                    ]
                }
            }
        }
    },
)
print(resp)
response = client.indices.create(
  index: 'classic_example',
  body: {
    settings: {
      analysis: {
        analyzer: {
          classic_analyzer: {
            tokenizer: 'classic',
            filter: [
              'classic'
            ]
          }
        }
      }
    }
  }
)
puts response
const response = await client.indices.create({
  index: "classic_example",
  settings: {
    analysis: {
      analyzer: {
        classic_analyzer: {
          tokenizer: "classic",
          filter: ["classic"],
        },
      },
    },
  },
});
console.log(response);
PUT /classic_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "classic_analyzer": {
          "tokenizer": "classic",
          "filter": [ "classic" ]
        }
      }
    }
  }
}
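Once the index exists, you can verify the custom analyzer by pointing the analyze API at it. A sketch using the Python client (reusing the sample text from the earlier example):

# Sketch: run the analyze API against the new index, selecting the
# custom analyzer by name instead of specifying tokenizer and filters.
resp = client.indices.analyze(
    index="classic_example",
    analyzer="classic_analyzer",
    text="The 2 Q.U.I.C.K. Brown-Foxes jumped over the lazy dog's bone.",
)
print(resp)

This should produce the same tokens shown above, since the classic_analyzer combines the classic tokenizer with the classic token filter.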