Elasticsearch billing dimensions

[preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Elasticsearch is priced based on the consumption of the underlying infrastructure used to support your use case, with the performance characteristics you need. Consumption is measured in Virtual Compute Units (VCUs), each of which is a slice of RAM, CPU, and local disk for caching. The number of VCUs required depends on the amount and rate of data sent to Elasticsearch and retained, and on the volume of searches and the search latency you require. In addition, if you use machine learning for inference or NLP tasks, those VCUs are also metered and billed.
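
To make the pricing model concrete, here is a minimal sketch of how a bill could be estimated from VCU usage and data at rest. The per-VCU-hour and per-GB rates below are hypothetical placeholders, not Elastic's published prices, and the usage figures are illustrative; check the Elasticsearch Serverless pricing page for actual rates.

```python
# Simplified sketch of the Elasticsearch Serverless cost model.
# All rates below are hypothetical placeholders -- the real per-VCU-hour
# and per-GB rates are listed on the Elasticsearch Serverless pricing page.

VCU_HOUR_RATE = 0.05          # hypothetical $ per VCU-hour
STORAGE_GB_MONTH_RATE = 0.02  # hypothetical $ per GB retained per month

def hourly_compute_cost(ingest_vcus: float, search_vcus: float, ml_vcus: float) -> float:
    """Compute cost for one hour of Ingest, Search, and ML VCU usage."""
    return (ingest_vcus + search_vcus + ml_vcus) * VCU_HOUR_RATE

def monthly_storage_cost(data_at_rest_gb: float) -> float:
    """Search AI Lake charge for data retained over a month."""
    return data_at_rest_gb * STORAGE_GB_MONTH_RATE

# Example: 6 Ingest VCUs, 10 Search VCUs, no ML, 500 GB at rest.
print(hourly_compute_cost(6, 10, 0))  # 0.80 per hour (hypothetical)
print(monthly_storage_cost(500))      # 10.00 per month (hypothetical)
```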

Minimum runtime VCUs

When you create an Elasticsearch Serverless project, a minimum number of VCUs are always allocated to your project to maintain basic capabilities. These VCUs are used for the following purposes:

  • Ingest: Ensure constant availability for ingesting data into your project (4 VCUs).
  • Search: Maintain a data cache and support low latency searches (8 VCUs).

These minimum VCUs are billed at the standard rate per VCU hour, incurring a minimum cost even when you’re not actively using your project. Learn more about minimum VCUs on Elasticsearch Serverless.
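
As a rough illustration of this billing floor, the snippet below treats billed VCUs as never dropping below the minimums listed above (4 Ingest and 8 Search). The max() floor is a simplified assumption about how the minimum is applied, and the rate is a hypothetical placeholder rather than a published price.

```python
# Simplified model of the minimum-VCU billing floor described above.
# The rate is a hypothetical placeholder; see the pricing page for real rates.
MIN_INGEST_VCUS = 4
MIN_SEARCH_VCUS = 8
VCU_HOUR_RATE = 0.05  # hypothetical $ per VCU-hour

def billable_vcus(used_ingest: float, used_search: float) -> float:
    """Assume billed VCUs never drop below the project minimums."""
    return max(used_ingest, MIN_INGEST_VCUS) + max(used_search, MIN_SEARCH_VCUS)

# An idle project still carries the 12-VCU floor:
idle_hourly_cost = billable_vcus(0, 0) * VCU_HOUR_RATE
print(idle_hourly_cost)        # 0.60 per hour (hypothetical)
print(idle_hourly_cost * 730)  # ~438 per month (hypothetical, 730 hours)
```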

Information about the VCU types (Search, Ingest, and ML)

There are three VCU types in Elasticsearch:

  • Ingest — The VCUs used to index incoming documents to be stored in Elasticsearch.
  • Search — The VCUs used to return search results with the latency and Queries per Second (QPS) you require.
  • Machine Learning — The VCUs used to perform inference, NLP tasks, and other ML activities.

Information about the Search AI Lake dimension (GB)

For Elasticsearch, the Search AI Lake is where data is stored and retained. This is charged per GB of data at rest. Depending on the enrichment, vectorization, and other activities performed during ingest, this size may differ from the original size of the source data.
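
To see why data at rest can end up larger than the source data, the sketch below estimates the extra bytes added when each document gains a dense vector embedding during ingest. The document count and vector dimensionality are illustrative assumptions, and real on-disk size also depends on compression, quantization, and index structures, so treat the result as a rough intuition rather than a measurement.

```python
# Rough estimate of how vectorization during ingest can inflate data at rest.
# Assumes one float32 dense vector per document; real on-disk size also depends
# on compression, quantization, and index overhead, so this is only indicative.

BYTES_PER_FLOAT32 = 4

def embedding_overhead_gb(doc_count: int, vector_dims: int) -> float:
    """Extra storage, in GB, added by one float32 vector per document."""
    return doc_count * vector_dims * BYTES_PER_FLOAT32 / 1024**3

# Example: 10 million documents, 384-dimensional embeddings.
print(round(embedding_overhead_gb(10_000_000, 384), 1))  # ~14.3 GB of added vectors
```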

Managing Elasticsearch costs

You can control costs in several ways. The first is the amount of data retained. Elasticsearch ensures that the most recent data is cached for fast retrieval, so retaining less data can reduce the number of Search VCUs required. If you need lower latency, you can add more Search VCUs by adjusting the Search Power. A further refinement applies to data streams, which can be used to store time series data: for that type of data you can define how many days of data you want cacheable, which affects the number of Search VCUs and therefore the cost. Note that Elasticsearch Serverless always maintains, and bills for, the minimum Ingest and Search VCUs described above.
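
For time series data in data streams, one concrete cost lever is the retention period. The sketch below shortens retention using the data stream lifecycle endpoint (PUT _data_stream/<name>/_lifecycle), assuming that endpoint is available to your project and that you authenticate with an API key; the endpoint URL, API key, data stream name, and retention value are all placeholders.

```python
# Minimal sketch: shorten retention on a time series data stream to reduce
# the amount of data retained (and therefore cached and stored).
# Endpoint URL, API key, data stream name, and retention are placeholders;
# this assumes the data stream lifecycle API is available to your project.
import requests

ES_URL = "https://my-project.es.example.com"  # placeholder project endpoint
API_KEY = "REDACTED"                          # placeholder API key
DATA_STREAM = "metrics-myapp-default"         # placeholder data stream name

resp = requests.put(
    f"{ES_URL}/_data_stream/{DATA_STREAM}/_lifecycle",
    headers={
        "Authorization": f"ApiKey {API_KEY}",
        "Content-Type": "application/json",
    },
    json={"data_retention": "7d"},  # keep only the last 7 days of data
)
resp.raise_for_status()
print(resp.json())  # acknowledgement on success
```

Shorter retention means less data kept in the Search AI Lake and less data that needs to stay cacheable, which is what drives the storage and Search VCU portions of the bill.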

For detailed Elasticsearch serverless project rates, check the Elasticsearch Serverless pricing page.