Ingest pipelines

[preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

This content applies to: Elasticsearch, Observability, and Security.

Ingest pipelines let you perform common transformations on your data before indexing. For example, you can use pipelines to remove fields, extract values from text, and enrich your data.

A pipeline consists of a series of configurable tasks called processors. Each processor runs sequentially, making specific changes to incoming documents. After the processors have run, Elasticsearch adds the transformed documents to your data stream or index.
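For illustration, a pipeline can also be defined through the Elasticsearch ingest pipeline API rather than the UI. This is a minimal sketch: the pipeline ID my-pipeline and the field names env and debug are placeholders, not values from this guide.

    PUT _ingest/pipeline/my-pipeline
    {
      "description": "Example only: add a constant field and drop a debug field before indexing",
      "processors": [
        { "set": { "field": "env", "value": "production" } },
        { "remove": { "field": "debug", "ignore_missing": true } }
      ]
    }

Documents indexed through this pipeline pass through the set processor and then the remove processor, in order, before they are stored.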

Create and manage pipelines

In Project settings → Management → Ingest Pipelines, you can:

  • View a list of your pipelines and drill down into details
  • Edit or clone existing pipelines
  • Delete pipelines

To create a pipeline, click Create pipeline → New pipeline. For an example tutorial, see Example: Parse logs.
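To give a rough idea of what a log-parsing pipeline contains, the sketch below uses a grok processor to pull structured fields out of an unstructured message field. The pipeline ID, pattern, and target field names are illustrative and are not taken from the tutorial.

    PUT _ingest/pipeline/parse-logs-example
    {
      "description": "Example only: extract fields from a log line with grok",
      "processors": [
        {
          "grok": {
            "field": "message",
            "patterns": ["%{IP:client.ip} %{WORD:http.request.method} %{URIPATHPARAM:url.path}"]
          }
        }
      ]
    }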

The New pipeline from CSV option lets you use a file with comma-separated values (CSV) to create an ingest pipeline that maps custom data to the Elastic Common Schema (ECS). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other data sets. To get started, check Map custom data to ECS.
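Under the hood, a pipeline that maps custom fields to ECS largely comes down to processors that move or copy fields to their ECS equivalents, such as rename. The pipeline ID and field names below are hypothetical examples of that kind of mapping, not output from the CSV tool.

    PUT _ingest/pipeline/custom-to-ecs-example
    {
      "description": "Example only: rename custom fields to ECS fields",
      "processors": [
        { "rename": { "field": "src_ip", "target_field": "source.ip", "ignore_missing": true } },
        { "rename": { "field": "user_name", "target_field": "user.name", "ignore_missing": true } }
      ]
    }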

Test pipelines

Before you use a pipeline in production, you should test it using sample documents. When creating or editing a pipeline in Ingest Pipelines, click Add documents. In the Documents tab, provide sample documents and click Run the pipeline.

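If you prefer to test from Dev Tools instead of the UI, the simulate pipeline API runs sample documents through a pipeline without indexing them. The pipeline ID my-pipeline and the sample document below are placeholders.

    POST _ingest/pipeline/my-pipeline/_simulate
    {
      "docs": [
        { "_source": { "message": "sample log line", "debug": "trace data" } }
      ]
    }

The response shows each transformed document, so you can verify that the processors behave as expected before sending real data through the pipeline.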