Evaluate ranked search results (added in 6.2.0)

GET /_rank_eval

Evaluate the quality of ranked search results over a set of typical search queries.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    Whether to expand wildcard expressions to concrete indices that are open, closed, or both.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • search_type string

    The search operation type.
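
For example, as a hedged sketch, a request restricted to open indices that uses the dfs_query_then_fetch search type could look like the following. The host, index pattern, and body file name are placeholders, and the index-scoped path is assumed from the wildcard-related parameters above:

curl \
 -X GET "http://api.example.com/my-index-*/_rank_eval?expand_wildcards=open&search_type=dfs_query_then_fetch" \
 -H "Content-Type: application/json" \
 -d @rank-eval-body.json

Here rank-eval-body.json would contain a request body as described below.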

Body Required (application/json)

  • requests array[object] Required

    A set of typical search requests, together with their provided ratings.

    • id string Required
    • request object

      Additional properties are allowed.

    • ratings array[object] Required

      List of document ratings

      • _id string Required
      • _index string Required
      • rating number Required

        The document’s relevance with regard to this search request.

    • params object

      The search template parameters.

      • * object Additional properties

        Additional properties are allowed.

  • metric object

    Definition of the evaluation metric to calculate. See the sketches after this parameter list for examples.

    Additional properties are allowed.

    • precision object

      Additional properties are allowed.

      • k number

        Sets the maximum number of documents retrieved per query. This value will act in place of the usual size parameter in the query.

      • relevant_rating_threshold number

        Sets the rating threshold above which documents are considered to be "relevant".

      • ignore_unlabeled boolean

        Controls how unlabeled documents in the search results are counted. If set to true, unlabeled documents are ignored and count as neither relevant nor irrelevant. If set to false (the default), they are treated as irrelevant.

    • recall object

      Additional properties are allowed.

      • k number

        Sets the maximum number of documents retrieved per query. This value will act in place of the usual size parameter in the query.

      • relevant_rating_threshold number

        Sets the rating threshold above which documents are considered to be "relevant".

    • mean_reciprocal_rank object

      Additional properties are allowed.

      • k number

        Sets the maximum number of documents retrieved per query. This value will act in place of the usual size parameter in the query.

      • relevant_rating_threshold number

        Sets the rating threshold above which documents are considered to be "relevant".

    • dcg object

      Additional properties are allowed.

      • k number

        Sets the maximum number of documents retrieved per query. This value will act in place of the usual size parameter in the query.

      • normalize boolean

        If set to true, this metric will calculate the Normalized DCG.

    • expected_reciprocal_rank object

      Additional properties are allowed.

      • k number

        Sets the maximum number of documents retrieved per query. This value will act in place of the usual size parameter in the query.

      • maximum_relevance number Required

        The highest relevance grade used in the user-supplied relevance judgments.
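
The sketches below show, in hedged form, what a metric definition can look like using only the parameters documented above; the numeric values are illustrative, not defaults:

"metric": {
  "precision": {
    "k": 20,
    "relevant_rating_threshold": 1,
    "ignore_unlabeled": false
  }
}

"metric": {
  "dcg": {
    "k": 20,
    "normalize": true
  }
}

"metric": {
  "expected_reciprocal_rank": {
    "k": 20,
    "maximum_relevance": 3
  }
}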

Responses

  • 200 application/json
    • metric_score number Required

      The overall evaluation quality calculated by the defined metric

    • details object Required

      The details section contains one entry for every query in the original requests section, keyed by the search request id

      • * object Additional properties

        Additional properties are allowed.

        • metric_score number Required

          The metric_score in the details section shows the contribution of this query to the global quality metric score

        • unrated_docs array[object] Required

          The unrated_docs section contains an _index and _id entry for each document in the search result for this query that didn’t have a ratings value. This can be used to ask the user to supply ratings for these documents

        • hits array[object] Required

          The hits section shows a grouping of the search results with their supplied ratings

        • metric_details object Required

          The metric_details give additional information about the calculated quality metric (e.g. how many of the retrieved documents were relevant). The content varies for each metric but allows for better interpretation of the results

          • * object Additional properties
            • * object Additional properties

              Additional properties are allowed.

    • failures object Required
      • * object Additional properties

        Additional properties are allowed.

GET /_rank_eval
curl \
 -X GET http://api.example.com/_rank_eval \
 -H "Content-Type: application/json" \
 -d '{"requests":[{"id":"string","request":{"query":{},"size":42.0},"ratings":[{"_id":"string","_index":"string","rating":42.0}],"template_id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}}}],"metric":{"":{"k":42.0,"maximum_relevance":42.0}}}'
Request examples
{
  "requests": [
    {
      "id": "string",
      "request": {
        "query": {},
        "size": 42.0
      },
      "ratings": [
        {
          "_id": "string",
          "_index": "string",
          "rating": 42.0
        }
      ],
      "template_id": "string",
      "params": {
        "additionalProperty1": {},
        "additionalProperty2": {}
      }
    }
  ],
  "metric": {
    "": {
      "k": 42.0,
      "maximum_relevance": 42.0
    }
  }
}
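As a more concrete sketch, a precision-at-10 evaluation of a single query might look like the following; the index name, query, document IDs, and ratings are illustrative placeholders:
{
  "requests": [
    {
      "id": "berlin_query",
      "request": {
        "query": { "match": { "text": "berlin" } },
        "size": 10
      },
      "ratings": [
        { "_id": "1", "_index": "my-index-000001", "rating": 1 },
        { "_id": "2", "_index": "my-index-000001", "rating": 0 },
        { "_id": "7", "_index": "my-index-000001", "rating": 1 }
      ]
    }
  ],
  "metric": {
    "precision": {
      "k": 10,
      "relevant_rating_threshold": 1,
      "ignore_unlabeled": false
    }
  }
}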
Response examples (200)
{
  "metric_score": 42.0,
  "details": {
    "additionalProperty1": {
      "metric_score": 42.0,
      "unrated_docs": [
        {
          "_id": "string",
          "_index": "string"
        }
      ],
      "hits": [
        {
          "hit": {
            "_id": "string",
            "_index": "string",
            "_score": 42.0
          },
          "rating": 42.0
        }
      ],
      "metric_details": {
        "additionalProperty1": {
          "additionalProperty1": {},
          "additionalProperty2": {}
        },
        "additionalProperty2": {
          "additionalProperty1": {},
          "additionalProperty2": {}
        }
      }
    },
    "additionalProperty2": {
      "metric_score": 42.0,
      "unrated_docs": [
        {
          "_id": "string",
          "_index": "string"
        }
      ],
      "hits": [
        {
          "hit": {
            "_id": "string",
            "_index": "string",
            "_score": 42.0
          },
          "rating": 42.0
        }
      ],
      "metric_details": {
        "additionalProperty1": {
          "additionalProperty1": {},
          "additionalProperty2": {}
        },
        "additionalProperty2": {
          "additionalProperty1": {},
          "additionalProperty2": {}
        }
      }
    }
  },
  "failures": {
    "additionalProperty1": {},
    "additionalProperty2": {}
  }
}
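Reading the response: as a hedged illustration for the precision metric, a query whose top k = 10 hits include 6 documents rated at or above relevant_rating_threshold gets a per-query metric_score of 6 / 10 = 0.6; the top-level metric_score then combines (typically by averaging) the per-query scores of all entries in requests.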