Get started

[preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Follow along to set up your Elasticsearch project and get started with some sample documents. Then, choose how to continue with your own data.

Create project

Use your Elastic Cloud account to create a fully managed Elasticsearch project:

  1. Navigate to cloud.elastic.co and create a new account or log in to your existing account.
  2. Within Serverless Projects, choose Create project.
  3. Choose the Elasticsearch project type.
  4. Select a configuration for your project, based on your use case.

    • General purpose. For general search use cases across various data types.
    • Optimized for Vectors. For search use cases using vectors and near real-time retrieval.
  5. Provide a name for the project and optionally edit the project settings, such as the cloud platform region. Select Create project to continue.
  6. Once the project is ready, select Continue.

You should now see Get started with Elasticsearch, and you’re ready to continue.

Minimum runtime VCUs

When you create an Elasticsearch Serverless project, a minimum number of VCUs are always allocated to your project to maintain basic capabilities. These VCUs are used for the following purposes:

  • Ingest: Ensure constant availability for ingesting data into your project (4 VCUs).
  • Search: Maintain a data cache and support low latency searches (8 VCUs).

These minimum VCUs are billed at the standard rate per VCU hour, incurring a minimum cost even when you’re not actively using your project. Learn more about minimum VCUs on Elasticsearch Serverless.
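For example, the 4 ingest and 8 search VCUs above add up to 12 VCUs reserved around the clock, so an otherwise idle project is billed for roughly 12 × 24 = 288 VCU hours per day at the standard rate.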

Create API key

Create an API key, which you'll use to authenticate requests to the Elasticsearch API when you ingest and search data.

  1. Scroll to Add an API Key and select New.
  2. In Create API Key, enter a name for your key and (optionally) set an expiration date.
  3. (Optional) Under Control Security privileges, you can set specific access permissions for this API key. By default, it has full access to all APIs.
  4. (Optional) The Add metadata section allows you to add custom key-value pairs to help identify and organize your API keys.
  5. Select Create API Key to finish.

After creation, your API key is displayed as an encoded string. Store it securely; it is shown only once and cannot be retrieved later. You will use this encoded key when sending API requests.

You can’t recover or retrieve a lost API key. Instead, you must delete the key and create a new one.
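The UI is the simplest way to create your first key. If you later want to create additional keys programmatically, Elasticsearch also exposes a create API key endpoint that you can call with an existing key. The following is a minimal sketch, assuming you've already set the ES_URL and API_KEY variables as described in Test connection below; the key name and expiration are just placeholders:

curl -X POST "${ES_URL}/_security/api_key" \
  -H "Authorization: ApiKey ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '
{
  "name": "my-second-key",
  "expiration": "30d"
}
'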

Copy URL

Next, copy the URL of your API endpoint. You’ll send all Elasticsearch API requests to this URL.

  1. Scroll to Copy your connection details.
  2. Find the value for Elasticsearch Endpoint.

Store this value along with your encoded API key. You’ll use both values in the next step.

Test connection

We’ll use the curl command to test your connection and make additional API requests. (See Install curl if you need to install this program.)

curl will need access to your Elasticsearch Endpoint and encoded API key. Within your terminal, assign these values to the ES_URL and API_KEY environment variables.

For example:

export ES_URL="https://dda7de7f1d264286a8fc9741c7741690.es.us-east-1.aws.elastic.cloud:443"
export API_KEY="ZFZRbF9Jb0JDMEoxaVhoR2pSa3Q6dExwdmJSaldRTHFXWEp4TFFlR19Hdw=="

Then run the following command to test your connection:

curl "${ES_URL}" \
  -H "Authorization: ApiKey ${API_KEY}" \
  -H "Content-Type: application/json"

You should receive a response similar to the following:

{
  "name" : "serverless",
  "cluster_name" : "dda7de7f1d264286a8fc9741c7741690",
  "cluster_uuid" : "ws0IbTBUQfigmYAVMztkZQ",
  "version" : { ... },
  "tagline" : "You Know, for Search"
}

Now you’re ready to ingest and search some sample documents.

Ingest data

This example uses Elasticsearch APIs to ingest data. If you’d prefer to upload a file using the UI, refer to Upload a file.

To ingest data, you must create an index and store some documents. This process is also called "indexing".

You can index multiple documents using a single POST request to the _bulk API endpoint. The request body specifies the documents to store and the indices in which to store them.

Elasticsearch automatically creates the index and uses dynamic mapping to assign a data type to each field in the documents. Include the ?pretty option to receive a human-readable response.

Run the following command to index some sample documents into the books index:

curl -X POST "${ES_URL}/_bulk?pretty" \
  -H "Authorization: ApiKey ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '
{ "index" : { "_index" : "books" } }
{"name": "Snow Crash", "author": "Neal Stephenson", "release_date": "1992-06-01", "page_count": 470}
{ "index" : { "_index" : "books" } }
{"name": "Revelation Space", "author": "Alastair Reynolds", "release_date": "2000-03-15", "page_count": 585}
{ "index" : { "_index" : "books" } }
{"name": "1984", "author": "George Orwell", "release_date": "1985-06-01", "page_count": 328}
{ "index" : { "_index" : "books" } }
{"name": "Fahrenheit 451", "author": "Ray Bradbury", "release_date": "1953-10-15", "page_count": 227}
{ "index" : { "_index" : "books" } }
{"name": "Brave New World", "author": "Aldous Huxley", "release_date": "1932-06-01", "page_count": 268}
{ "index" : { "_index" : "books" } }
{"name": "The Handmaids Tale", "author": "Margaret Atwood", "release_date": "1985-06-01", "page_count": 311}
'

You should receive a response indicating there were no errors:

{
  "errors" : false,
  "took" : 1260,
  "items" : [ ... ]
}
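If you'd like to confirm how dynamic mapping assigned data types, or check that all six sample documents were stored, you can optionally inspect the index mapping and document count. Newly indexed documents can take a moment to become visible to searches and counts:

curl "${ES_URL}/books/_mapping?pretty" \
  -H "Authorization: ApiKey ${API_KEY}"

curl "${ES_URL}/books/_count?pretty" \
  -H "Authorization: ApiKey ${API_KEY}"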

Search data

To search, send a POST request to the _search endpoint, specifying the index to search. Use the Elasticsearch query DSL to construct your request body.

Run the following command to search the books index for documents containing snow:

curl -X POST "${ES_URL}/books/_search?pretty" \
  -H "Authorization: ApiKey ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '
{
  "query": {
    "query_string": {
      "query": "snow"
    }
  }
}
'

You should receive a response with the results:

{
  "took" : 24,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 1.5904956,
    "hits" : [
      {
        "_index" : "books",
        "_id" : "Z3hf_IoBONQ5TXnpLdlY",
        "_score" : 1.5904956,
        "_source" : {
          "name" : "Snow Crash",
          "author" : "Neal Stephenson",
          "release_date" : "1992-06-01",
          "page_count" : 470
        }
      }
    ]
  }
}
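The query_string query above searches across all fields in each document. To target a single field, you can use other query DSL types, such as a match query. Here's a minimal sketch that searches only the author field of the books index you created earlier:

curl -X POST "${ES_URL}/books/_search?pretty" \
  -H "Authorization: ApiKey ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '
{
  "query": {
    "match": {
      "author": "ray bradbury"
    }
  }
}
'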

Continue on your own

Congratulations! You’ve set up an Elasticsearch project, and you’ve ingested and searched some sample data. Now you’re ready to continue on your own.

Explore

Want to explore the sample documents or your own data?

By creating a data view, you can explore data using several UI tools, such as Discover or Dashboards. Or, use Elasticsearch aggregations to explore your data using the API. Find more information in Explore your data.
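For example, a terms aggregation groups the sample books by author. This sketch assumes the default dynamic mapping, which adds a keyword subfield (author.keyword) to text fields:

curl -X POST "${ES_URL}/books/_search?pretty" \
  -H "Authorization: ApiKey ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '
{
  "size": 0,
  "aggs": {
    "books_per_author": {
      "terms": { "field": "author.keyword" }
    }
  }
}
'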

Build

Ready to build your own solution?

To learn more about sending and syncing data to Elasticsearch, see Ingest your data. For details on the search API and its query DSL, see REST APIs.