Perform streaming inference (Added in 8.16.0)

POST /_inference/{task_type}/{inference_id}/_stream

Get real-time responses for completion tasks: the answer is delivered incrementally as it is generated, so partial results arrive during computation rather than after the full response is ready. This API works only with the completion task type.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs with these models, or if you want to use non-NLP models, use the machine learning trained model APIs instead.

This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege). You must use a client that supports streaming.
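
A client that supports streaming can be as simple as an HTTP library that exposes incremental reads. The Python sketch below is illustrative only: the host, the ES_API_KEY environment variable, and the openai-completion endpoint ID are assumptions for this example, and the [DONE] end-of-stream marker is likewise assumed. It parses the server-sent-events framing shown in the response example at the end of this page.

import json
import os

import requests

# Assumed values for illustration; point these at your own deployment.
URL = "http://localhost:9200/_inference/completion/openai-completion/_stream"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"ApiKey {os.environ['ES_API_KEY']}",
}

# stream=True tells requests to hand back bytes as they arrive instead of
# buffering the whole response, which is what this endpoint requires.
with requests.post(URL, headers=HEADERS, json={"input": "What is Elastic?"}, stream=True) as resp:
    resp.raise_for_status()
    data_lines = []
    for line in resp.iter_lines(decode_unicode=True):
        if line.startswith("data:"):
            # One server-sent event's payload may span several data: lines.
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            # A blank line ends the event; join the payload and parse it.
            payload = "\n".join(data_lines)
            data_lines = []
            if payload == "[DONE]":  # assumed end-of-stream marker
                break
            for chunk in json.loads(payload).get("completion", []):
                print(chunk.get("delta", ""), end="", flush=True)
    print()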

Path parameters

  • task_type string Required

    The type of task that the model performs.

    Values are sparse_embedding, text_embedding, rerank, or completion.

  • inference_id string Required

    The unique identifier for the inference endpoint.

Body (application/json)

  • input string | array[string] Required

    The text on which you want to perform the inference task. It can be a single string or an array.

    NOTE: Inference endpoints for the completion task type currently only support a single string as input.

Responses

  • 200 application/json

    The successful response is streamed back as server-sent events rather than returned as a single JSON object; each event's data payload is a JSON chunk of the completion. Additional properties are allowed.

POST /_inference/{task_type}/{inference_id}/_stream
curl \
 -N \
 -X POST "http://api.example.com/_inference/{task_type}/{inference_id}/_stream" \
 -H "Content-Type: application/json" \
 -d '{"input":"What is Elastic?"}'
Request example
Run `POST _inference/completion/openai-completion/_stream` to perform a completion on the example question with streaming.
{
  "input": "What is Elastic?"
}
Response examples (200)
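Rather than a single JSON document, a successful response arrives as a sequence of server-sent events. The sequence below is illustrative: the completion/delta shape matches this API's documented chunk format, but the exact payloads and the number of chunks depend on the underlying service.

event: message
data: {"completion":[{"delta":"Elastic is"}]}

event: message
data: {"completion":[{"delta":" a search company."}]}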