Semantic text field type
This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
The semantic_text field type automatically generates embeddings for text content using an inference endpoint. Long passages are automatically chunked into smaller sections to enable processing of larger corpora of text.

The semantic_text field type specifies an inference endpoint identifier that will be used to generate embeddings. You can create the inference endpoint by using the Create inference API.
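As a rough sketch, a request body for creating an ELSER endpoint with the Create inference API might look like the following. The endpoint identifier my-elser-endpoint matches the mapping examples below, but the service settings shown here are illustrative assumptions, not requirements from this page.

```python
# Illustrative request body for the Create inference API. The service
# settings below are assumptions for this sketch, sent with:
#   PUT _inference/sparse_embedding/my-elser-endpoint
create_endpoint_body = {
    "service": "elser",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1,
    },
}
```

The mapping examples that follow reference this endpoint through its identifier in the inference_id parameter.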
This field type and the semantic query type make it simpler to perform semantic search on your data. Using semantic_text, you won't need to specify how to generate embeddings for your data, or how to index it. The inference endpoint automatically determines the embedding generation, indexing, and query to use.
resp = client.indices.create(
    index="my-index-000001",
    mappings={
        "properties": {
            "inference_field": {
                "type": "semantic_text",
                "inference_id": "my-elser-endpoint"
            }
        }
    },
)
print(resp)
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint"
      }
    }
  }
}
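Once the mapping exists and documents are ingested, the field can be searched with the semantic query. A minimal sketch of the search request body follows; the query string is illustrative.

```python
# Sketch of a search request body using the semantic query against the
# semantic_text field defined in the mapping above. The query text is an
# illustrative assumption.
search_body = {
    "query": {
        "semantic": {
            "field": "inference_field",
            "query": "Which droids are we looking for?",
        }
    }
}
```

The inference endpoint associated with the field generates the query embedding automatically; no embedding needs to be supplied in the request.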
Parameters for semantic_text fields

inference_id
(Required, string) Inference endpoint that will be used to generate the embeddings for the field. Use the Create inference API to create the endpoint.
Inference endpoint validation

The inference_id will not be validated when the mapping is created, but when documents are ingested into the index. When the first document is indexed, the inference_id will be used to generate the underlying indexing structures for the field.

Removing an inference endpoint will cause ingestion of documents and semantic queries to fail on indices that define semantic_text fields with that inference endpoint as their inference_id. Before removal, check whether any inference endpoints are used in semantic_text fields.
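One client-side way to perform this check, sketched here under the assumption that you have already fetched an index's mappings (for example with the Get mapping API), is to scan the mapping for semantic_text fields that reference the endpoint:

```python
def fields_using_endpoint(mappings, inference_id):
    """Return names of top-level semantic_text fields that reference the
    given inference endpoint. `mappings` is the "mappings" object of a
    single index, as returned by the Get mapping API."""
    matches = []
    for name, definition in mappings.get("properties", {}).items():
        if (
            definition.get("type") == "semantic_text"
            and definition.get("inference_id") == inference_id
        ):
            matches.append(name)
    return matches


# Example mapping, shaped like the one created earlier on this page.
example_mappings = {
    "properties": {
        "inference_field": {
            "type": "semantic_text",
            "inference_id": "my-elser-endpoint",
        },
        "title": {"type": "text"},
    }
}
```

This sketch only inspects top-level properties; a production check would also recurse into object fields and iterate over every index.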
Automatic text chunking
editInference endpoints have a limit on the amount of text they can process.
To allow for large amounts of text to be used in semantic search, semantic_text
automatically generates smaller passages if needed, called chunks.
Each chunk will include the text subpassage and the corresponding embedding generated from it. When querying, the individual passages will be automatically searched for each document, and the most relevant passage will be used to compute a score.
Documents are split into 250-word sections with a 100-word overlap so that each section shares 100 words with the previous section. This overlap ensures continuity and prevents vital contextual information in the input text from being lost by a hard break.
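The splitting strategy described above can be sketched as a sliding word window with a stride of 150 words (a 250-word window minus the 100-word overlap). This is a simplified illustration of the behavior, not the actual implementation:

```python
def chunk_words(text, chunk_size=250, overlap=100):
    """Split text into windows of `chunk_size` words, where each window
    shares `overlap` words with the previous one."""
    words = text.split()
    stride = chunk_size - overlap  # 150 words by default
    chunks = []
    for start in range(0, max(len(words), 1), stride):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```

For a 500-word document this produces three chunks (words 0-249, 150-399, and 300-499), with each pair of consecutive chunks sharing exactly 100 words.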
semantic_text structure

Once a document is ingested, a semantic_text field will have the following structure:
"inference_field": { "text": "these are not the droids you're looking for", "inference": { "inference_id": "my-elser-endpoint", "model_settings": { "task_type": "sparse_embedding" }, "chunks": [ { "text": "these are not the droids you're looking for", "embeddings": { (...) } } ] } }
The field becomes an object structure that accommodates both the original text and the inference results. The model_settings object records the model settings, including the task type and, if applicable, the dimensions and similarity. Inference results are grouped in chunks, each with its corresponding text and embeddings.
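Given a document _source shaped like the structure above, the chunk texts can be read back client-side. A minimal sketch, with the embeddings elided:

```python
# A _source shaped like the structure shown above (embeddings elided).
doc_source = {
    "inference_field": {
        "text": "these are not the droids you're looking for",
        "inference": {
            "inference_id": "my-elser-endpoint",
            "model_settings": {"task_type": "sparse_embedding"},
            "chunks": [
                {
                    "text": "these are not the droids you're looking for",
                    "embeddings": {},
                }
            ],
        },
    }
}

# Collect the text of every chunk generated for the field.
chunk_texts = [
    chunk["text"]
    for chunk in doc_source["inference_field"]["inference"]["chunks"]
]
```

Here the passage is short enough to fit in a single chunk, so chunk_texts holds one entry that matches the original field text.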
Refer to this tutorial to learn more about semantic search using semantic_text and the semantic query.
Customizing semantic_text indexing

semantic_text uses defaults for indexing data based on the specified inference endpoint. It enables you to quickly start your semantic search by providing automatic inference and a dedicated query, so you don't need to provide further details.
If you want to customize data indexing, use the sparse_vector or dense_vector field types and create an ingest pipeline with an inference processor to generate the embeddings. This tutorial walks you through the process. In these cases - when you use sparse_vector or dense_vector field types instead of the semantic_text field type to customize indexing - using the semantic query is not supported for querying the field data.
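A sketch of this custom-indexing alternative follows: an explicit dense_vector mapping plus an ingest pipeline containing an inference processor. The endpoint identifier my-text-embedding-endpoint, the field names, and the dimension count are illustrative assumptions; refer to the linked tutorial for a complete walkthrough.

```python
# Illustrative mapping with an explicit dense_vector field. The dims value
# must match the output dimensions of your embedding model (384 is an
# assumption here).
custom_mapping = {
    "properties": {
        "content": {"type": "text"},
        "content_embedding": {"type": "dense_vector", "dims": 384},
    }
}

# Illustrative ingest pipeline with an inference processor that writes the
# embedding of `content` into `content_embedding` at index time.
custom_pipeline = {
    "processors": [
        {
            "inference": {
                "model_id": "my-text-embedding-endpoint",
                "input_output": {
                    "input_field": "content",
                    "output_field": "content_embedding",
                },
            }
        }
    ]
}
```

Unlike semantic_text, this approach leaves chunking, index options, and the query (for example a knn search over content_embedding) entirely up to you.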
Updates to semantic_text fields

Updates that use scripts are not supported for an index that contains a semantic_text field. Even if the script targets non-semantic_text fields, the update will fail when the index contains a semantic_text field.
copy_to support

The semantic_text field type can be the target of copy_to fields. This means you can use a single semantic_text field to collect the values of other fields for semantic search. Each value has its embeddings calculated separately; each field value is a separate set of chunk(s) in the resulting embeddings.

This imposes a restriction on bulk requests and ingestion pipelines that update documents with semantic_text fields. In these cases, all fields that are copied to a semantic_text field, including the semantic_text field value itself, must have a value to ensure every embedding is calculated correctly.
For example, the following mapping:
resp = client.indices.create(
    index="test-index",
    mappings={
        "properties": {
            "infer_field": {
                "type": "semantic_text",
                "inference_id": "my-elser-endpoint"
            },
            "source_field": {
                "type": "text",
                "copy_to": "infer_field"
            }
        }
    },
)
print(resp)
const response = await client.indices.create({
  index: "test-index",
  mappings: {
    properties: {
      infer_field: {
        type: "semantic_text",
        inference_id: "my-elser-endpoint",
      },
      source_field: {
        type: "text",
        copy_to: "infer_field",
      },
    },
  },
});
console.log(response);
PUT test-index
{
  "mappings": {
    "properties": {
      "infer_field": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint"
      },
      "source_field": {
        "type": "text",
        "copy_to": "infer_field"
      }
    }
  }
}
will need the following bulk update request to ensure that infer_field is updated correctly:
resp = client.bulk(
    index="test-index",
    operations=[
        {"update": {"_id": "1"}},
        {
            "doc": {
                "infer_field": "updated inference field",
                "source_field": "updated source field"
            }
        },
    ],
)
print(resp)
const response = await client.bulk({
  index: "test-index",
  operations: [
    {
      update: {
        _id: "1",
      },
    },
    {
      doc: {
        infer_field: "updated inference field",
        source_field: "updated source field",
      },
    },
  ],
});
console.log(response);
PUT test-index/_bulk
{"update": {"_id": "1"}}
{"doc": {"infer_field": "updated inference field", "source_field": "updated source field"}}
Notice that both the semantic_text field and the source field are updated in the bulk request.
Limitations

semantic_text field types have the following limitations:

- semantic_text fields are not currently supported as elements of nested fields.
- semantic_text fields can't currently be set as part of dynamic templates.
- semantic_text fields can't be defined as multi-fields of another field, nor can they contain other fields as multi-fields.