Evaluating search relevance part 1 - The BEIR benchmark

This is the first in a series of blog posts discussing how to evaluate your own search systems in the context of better understanding the BEIR benchmark. Along the way we will introduce specific tips and techniques to improve your search evaluation processes, as well as common gotchas which make evaluation less reliable. Finally, we note that LLMs provide a powerful new tool in the search engineer's arsenal, and we will show by example how you can use them to help evaluate search.

Understanding the BEIR benchmark in search relevance evaluation

To improve any system you need to be able to measure how well it is doing. In the context of search, BEIR (or equivalently the Retrieval section of the MTEB leaderboard) is considered the “holy grail” by the information retrieval community, and that is no surprise: it is a well-structured benchmark with varied datasets across different tasks. More specifically, the following areas are covered:

  • Argument retrieval (ArguAna, Touche2020)
  • Open-domain QA (HotpotQA, Natural Questions, FiQA)
  • Passage retrieval (MSMARCO)
  • Duplicate question retrieval (Quora, CQADupstack)
  • Fact-checking (FEVER, Climate-FEVER, Scifact)
  • Biomedical information retrieval (TREC-COVID, NFCorpus, BioASQ)
  • Entity retrieval (DBPedia)
  • Citation prediction (SCIDOCS)

It provides a single statistic, nDCG@10, which captures how well a system ranks the most relevant documents for each task example within the top results it returns. For a search system that a human interacts with, the relevance of the top results is critical. However, there are many nuances to evaluating search that a single summary statistic misses.
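To make the metric concrete, here is a minimal, self-contained sketch of how nDCG@10 can be computed for a single query. BEIR itself relies on the pytrec_eval package, so treat this as an illustration of the formula rather than the benchmark's exact code:

import math

def ndcg_at_10(ranked_doc_ids, qrels_for_query):
    """Compute nDCG@10 for a single query.

    ranked_doc_ids: doc ids ordered by the system's score, best first.
    qrels_for_query: dict mapping doc id -> graded relevance (unjudged docs count as 0).
    """
    def dcg(gains):
        return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

    gains = [qrels_for_query.get(doc_id, 0) for doc_id in ranked_doc_ids[:10]]
    ideal = sorted(qrels_for_query.values(), reverse=True)[:10]
    ideal_dcg = dcg(ideal)
    return dcg(gains) / ideal_dcg if ideal_dcg > 0 else 0.0

# One relevant document (grade 1) returned at rank 3 -> 1 / log2(4) = 0.5
print(ndcg_at_10(["d7", "d2", "d5"], {"d5": 1}))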

Structure of a BEIR dataset

Each BEIR dataset has three artefacts:

  • the corpus or documents to retrieve
  • the queries
  • the relevance judgements for the queries (aka qrels).

Relevance judgments are provided as a score which is zero or greater. Non-zero scores indicate that the document is somewhat related to the query.
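As a concrete example, the official beir Python package downloads a dataset and loads these three artefacts in a few lines; the sketch below uses SciFact, one of the smaller corpora:

from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip a BEIR dataset into ./datasets
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus:  doc_id   -> {"title": ..., "text": ...}
# queries: query_id -> query text
# qrels:   query_id -> {doc_id: relevance score}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

print(len(corpus), len(queries))  # 5,183 documents and 300 test queries (see Table 1)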

| Dataset | Corpus size | #Queries in the test set | #qrels positively labeled | #qrels equal to zero | #duplicates in the corpus |
|---|---|---|---|---|---|
| ArguAna | 8,674 | 1,406 | 1,406 | 0 | 96 |
| Climate-FEVER | 5,416,593 | 1,535 | 4,681 | 0 | 0 |
| DBPedia | 4,635,922 | 400 | 15,286 | 28,229 | 0 |
| FEVER | 5,416,568 | 6,666 | 7,937 | 0 | 0 |
| FiQA-2018 | 57,638 | 648 | 1,706 | 0 | 0 |
| HotpotQA | 5,233,329 | 7,405 | 14,810 | 0 | 0 |
| Natural Questions | 2,681,468 | 3,452 | 4,021 | 0 | 16,781 |
| NFCorpus | 3,633 | 323 | 12,334 | 0 | 80 |
| Quora | 522,931 | 10,000 | 15,675 | 0 | 1,092 |
| SCIDOCS | 25,657 | 1,000 | 4,928 | 25,000 | 2 |
| Scifact | 5,183 | 300 | 339 | 0 | 0 |
| Touche2020 | 382,545 | 49 | 932 | 1,982 | 5,357 |
| TREC-COVID | 171,332 | 50 | 24,763 | 41,663 | 0 |
| MSMARCO | 8,841,823 | 6,980 | 7,437 | 0 | 324 |
| CQADupstack (sum) | 457,199 | 13,145 | 23,703 | 0 | 0 |

Table 1: Dataset statistics. The numbers were calculated on the test portion of the datasets (dev for MSMARCO).

Table 1 presents some statistics for the datasets that comprise the BEIR benchmark, such as the number of documents in the corpus, the number of queries in the test dataset and the number of positive/negative (query, doc) pairs in the qrels file. From a quick look at the data we can immediately infer the following:

  • Most of the datasets do not contain any negative relationships in the qrels file, i.e. zero scores, which would explicitly denote documents as irrelevant to the given query.
  • The average number of document relationships per query (#qrels / #queries) varies from 1.0 in the case of ArguAna to 493.5 (TREC-COVID) but with a value <5 for the majority of the cases.
  • Some datasets suffer from duplicate documents in the corpus, which in some cases may lead to incorrect evaluation, i.e. when a document is considered relevant to a query but its duplicate is not. For example, in ArguAna we have identified 96 cases of duplicate doc pairs with only one doc per pair being marked as relevant to a query. By “expanding” the initial qrels list to also include the duplicates we have observed a relative increase of ~1% in the nDCG@10 score on average.
{
  "_id": "test-economy-epiasghbf-pro02b",
  "title": "economic policy international africa society gender house believes feminisation",
  "text": "Again employment needs to be contextualised with …",
  "metadata": {}
}
{
  "_id": "test-society-epiasghbf-pro02b",
  "title": "economic policy international africa society gender house believes feminisation",
  "text": "Again employment needs to be contextualised with …",
  "metadata": {}
}

Example of a duplicate pair in ArguAna. In the qrels file only the first document appears as relevant (as a counter-argument) to the query “test-economy-epiasghbf-pro02a”.
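One simple way to surface exact duplicates like the pair above is to hash each document's concatenated title and text and group identical hashes. This is only a sketch of the idea: it catches byte-for-byte duplicates, while near-duplicates would need fuzzier matching.

import hashlib
from collections import defaultdict

def find_exact_duplicates(corpus):
    """Group documents whose title + text are identical.

    corpus: dict of doc_id -> {"title": ..., "text": ...}, as loaded by GenericDataLoader.
    Returns only the groups containing more than one document id.
    """
    buckets = defaultdict(list)
    for doc_id, doc in corpus.items():
        key = hashlib.sha1((doc.get("title", "") + "\n" + doc["text"]).encode("utf-8")).hexdigest()
        buckets[key].append(doc_id)
    return [ids for ids in buckets.values() if len(ids) > 1]

duplicate_groups = find_exact_duplicates(corpus)  # corpus loaded as in the earlier sketch
print(f"{len(duplicate_groups)} groups of exact duplicates")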

When comparing models on the MTEB leaderboard it is tempting to focus on average retrieval quality. This is a good proxy for the overall quality of a model, but it doesn't necessarily tell you how it will perform for you. Since results are reported per dataset, it is worth understanding how closely the different datasets relate to your search task and rescoring models using only the most relevant ones, as sketched below. If you want to dig deeper, you can additionally check for topic overlap with the various dataset corpora. Stratifying quality measures by topic gives a much finer-grained assessment of a model's specific strengths and weaknesses.
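As a trivial illustration of such rescoring, the sketch below re-averages per-dataset nDCG@10 scores over a hand-picked subset of datasets; the model names and scores are placeholders you would copy from the leaderboard:

# Hypothetical per-dataset nDCG@10 scores copied from the MTEB leaderboard
model_scores = {
    "model-a": {"HotpotQA": 0.71, "FEVER": 0.87, "NFCorpus": 0.36, "ArguAna": 0.57},
    "model-b": {"HotpotQA": 0.68, "FEVER": 0.89, "NFCorpus": 0.39, "ArguAna": 0.49},
}

# Only the datasets that resemble your own task, e.g. multi-hop QA and fact-checking
relevant = ["HotpotQA", "FEVER"]

for model, scores in model_scores.items():
    subset_avg = sum(scores[d] for d in relevant) / len(relevant)
    print(model, round(subset_avg, 3))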

One important note here is that when a document is not marked in the qrels file, it is by default considered irrelevant to the query. We dive a little further into this area and collect some evidence to shed light on the following question: “How often is an evaluator presented with (query, document) pairs for which there is no ground truth information?” The reason this is important is that when only shallow markup is available (and thus not every relevant document is labeled as such), one information retrieval system can be judged worse than another just because it “chooses” to surface different relevant (but unmarked) documents. This is a common gotcha in creating high quality evaluation sets, particularly for large datasets. To be feasible, manual labelling usually focuses on the top results returned by the current system, so it potentially misses relevant documents in its blind spots. Therefore, it is usually preferable to focus more resources on fuller markup of fewer queries than on broad but shallow markup.

Leveraging the BEIR benchmark for search relevance evaluation

To initiate our analysis we implement the following scenario (see the notebook):

  1. First, we load the corpus of each dataset into an Elasticsearch index.
  2. For each query in the test set we retrieve the top-100 documents with BM25.
  3. We rerank the retrieved documents using a variety of SOTA reranking models.
  4. Finally, we report the “judge rate” for the top-10 documents coming from steps 2 (after retrieval) and 3 (after reranking). In other words, we calculate the average percentage of the top-10 documents that have a score in the qrels file (a sketch of this computation follows below).
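The following is a minimal sketch of steps 2 and 4 using the Elasticsearch Python client. The index name and field names are assumptions, and the full pipeline (including indexing and reranking) lives in the notebook:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumes a local cluster with the corpus already indexed

def judge_rate_at_10(queries, qrels, index="beir-corpus"):
    """Average fraction of the top-10 BM25 hits per query that have an entry in the qrels file."""
    judged, total = 0, 0
    for query_id, query_text in queries.items():
        response = es.search(
            index=index,
            query={"multi_match": {"query": query_text, "fields": ["title", "text"]}},
            size=10,
        )
        for hit in response["hits"]["hits"]:
            total += 1
            if hit["_id"] in qrels.get(query_id, {}):  # documents indexed with _id = BEIR doc id
                judged += 1
    return judged / total if total else 0.0

print(f"Judge rate @10: {judge_rate_at_10(queries, qrels):.2%}")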

The reranking models we used are the ones shown in the columns of Table 2: Cohere Rerank v2 and v3, BGE-base, mxbai-rerank-xsmall-v1 and MiniLM-L-6-v2.

| Dataset | BM25 (retrieval) (%) | Cohere Rerank v2 (%) | Cohere Rerank v3 (%) | BGE-base (%) | mxbai-rerank-xsmall-v1 (%) | MiniLM-L-6-v2 (%) |
|---|---|---|---|---|---|---|
| ArguAna | 7.54 | 4.87 | 7.87 | 4.52 | 4.53 | 6.84 |
| Climate-FEVER | 5.75 | 6.24 | 8.15 | 9.36 | 7.79 | 7.58 |
| DBPedia | 61.18 | 60.78 | 64.15 | 63.9 | 63.5 | 67.62 |
| FEVER | 8.89 | 9.97 | 10.08 | 10.19 | 9.88 | 9.88 |
| FiQA-2018 | 7.02 | 11.02 | 10.77 | 8.43 | 9.1 | 9.44 |
| HotpotQA | 12.59 | 14.5 | 14.76 | 15.1 | 14.02 | 14.42 |
| Natural Questions | 5.94 | 8.84 | 8.71 | 8.37 | 8.14 | 8.34 |
| NFCorpus | 31.67 | 32.9 | 33.91 | 30.63 | 32.77 | 32.45 |
| Quora | 12.2 | 10.46 | 13.04 | 11.26 | 12.58 | 12.78 |
| SCIDOCS | 8.62 | 9.41 | 9.71 | 8.04 | 8.79 | 8.52 |
| Scifact | 9.07 | 9.57 | 9.77 | 9.3 | 9.1 | 9.17 |
| Touche2020 | 38.78 | 30.41 | 32.24 | 33.06 | 37.96 | 33.67 |
| TREC-COVID | 92.4 | 98.4 | 98.2 | 93.8 | 99.6 | 97.4 |
| MSMARCO | 3.97 | 6.00 | 6.03 | 6.07 | 5.47 | 6.11 |
| CQADupstack (avg.) | 5.47 | 6.32 | 6.87 | 5.89 | 6.22 | 6.16 |

Table 2: Judge rate per (dataset, reranker) pairs calculated on the top-10 retrieved/reranked documents

From Table 2 we see that, with the exception of TREC-COVID (>90% coverage), DBPedia (~65%), Touche2020 and NFCorpus (~35%), the majority of the datasets have a labeling rate between 5% and a little more than 10% after retrieval or reranking. This doesn't mean that all these unmarked documents are relevant, but a subset of them, especially those placed in the top positions, could be positive.

With the arrival of general purpose instruction-tuned language models, we have a powerful new tool which can potentially automate relevance judgment. These methods are typically far too computationally expensive to be used online for search, but here we are concerned with offline evaluation. In the following we use them to explore the evidence that some of the BEIR datasets suffer from shallow markup.
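As a taste of what this looks like in practice, here is a minimal sketch of LLM-based relevance judgment with Phi-3-mini-4k via Hugging Face transformers. The prompt wording is illustrative only and is not the carefully tuned prompt discussed below:

from transformers import pipeline

# Requires a recent transformers release; device_map="auto" needs accelerate installed
judge = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",
    device_map="auto",
    trust_remote_code=True,
)

def judge_relevance(query, passage):
    messages = [{
        "role": "user",
        "content": (
            "Decide whether the passage answers the query. "
            "Reply with a single word: Relevant or Irrelevant.\n\n"
            f"Query: {query}\nPassage: {passage}"
        ),
    }]
    output = judge(messages, max_new_tokens=5, do_sample=False)
    reply = output[0]["generated_text"][-1]["content"]
    return reply.strip().lower().startswith("relevant")

print(judge_relevance("do bigger tires affect gas mileage",
                      "Tire Size and Width Influences Gas Mileage. ..."))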

In order to further investigate this hypothesis, we decided to focus on MSMARCO and selected a subset of 100 queries along with the top-5 reranked (with Cohere v2) documents which are not currently marked as relevant. We followed two different paths of evaluation: first, we used a carefully tuned prompt (more on this in a later post) to prime the recently released Phi-3-mini-4k model to predict the relevance (or not) of a document to the query. In parallel, these cases were manually labeled in order to assess the agreement rate between the LLM output and human judgment. Overall, we can draw the following two conclusions:

  • The agreement rate between the LLM responses and human judgments was a little over 80%, which seems good enough as a starting point in that direction.
  • In 54.5% of the cases (based on human judgment) the returned documents were found to be actually relevant to the query. To state this in a different way: for 100 queries we have 107 documents judged relevant in the qrels file, but at least 0.545 x 5 x 100 ≈ 273 extra documents which are actually relevant!

Here are some examples drawn from the MSMARCO/dev dataset which contain the query, the annotated positive document (from the qrels) and a false negative document due to incomplete markup:

Example 1:

{
  "query":
    {
        "_id": 155234,
        "text": "do bigger tires affect gas mileage"
    },
  "positive_doc":
    {
        "_id": 502713,
        "text": "Tire Width versus Gas Mileage. Tire width is one of the only tire size factors that can influence gas mileage in a positive way. For example, a narrow tire will have less wind resistance, rolling resistance, and weight; thus increasing gas mileage.",
    },
    "negative_doc":
    {
        "_id": 7073658,
        "text": "Tire Size and Width Influences Gas Mileage. There are two things to consider when thinking about tires and their effect on gas mileage; one is wind resistance, and the other is rolling resistance. When a car is driving at higher speeds, it experiences higher wind resistance; this means lower fuel economy."
    }
}

Example 2:

{
  "query":
    {
        "_id": 300674,
        "text": "how many years did william bradford serve as governor of plymouth colony?"
    },
  "positive_doc":
    {
        "_id": 7067032,
        "text": "http://en.wikipedia.org/wiki/William_Bradford_(Plymouth_Colony_governor) William Bradford (c.1590 \u00e2\u0080\u0093 1657) was an English Separatist leader in Leiden, Holland and in Plymouth Colony was a signatory to the Mayflower Compact. He served as Plymouth Colony Governor five times covering about thirty years between 1621 and 1657."
    },
    "negative_doc":
    {
        "_id": 2495763,
        "text": "William Bradford was the governor of Plymouth Colony for 30 years. The colony was founded by people called Puritans. They were some of the first people from England to settle in what is now the United States. Bradford helped make Plymouth the first lasting colony in New England."
    }
}

Manually evaluating specific queries like this is a generally useful technique for understanding search quality that complements quantitative measures like nDCG@10. If you have a representative set of queries you always run when you make changes to search, it gives you important qualitative information about how performance changes which is invisible in the statistics. For example, it gives you much more insight into the false results your search returns: it can help you spot obvious howlers in retrieved results, classes of related mistakes such as misinterpreting domain-specific terminology, and so on.

Our result is in agreement with related research on MSMARCO evaluation. For example, Arabzadeh et al. follow a similar procedure where they employ crowdsourced workers to make preference judgments: among other things, they show that in many cases the documents returned by the reranking modules are preferred over the documents in the MSMARCO qrels file. Another piece of evidence comes from the authors of the RocketQA reranker, who report that more than 70% of the reranked documents were found relevant after manual inspection.

Main takeaways & next steps

  • The pursuit of better ground truth is never-ending, as it is crucial for benchmarking and model comparison. LLMs can assist in some evaluation areas if used with caution and tuned with proper instructions.

  • More generally, given that benchmarks will never be perfect, it might be preferable to switch from a pure score comparison to more robust techniques capturing statistically significant differences. The work of Arabzadeh et al. provides a nice example of this: based on their findings they build 95% confidence intervals indicating significant (or not) differences between the various runs. In the accompanying notebook we provide an implementation of confidence intervals using bootstrapping; a minimal sketch follows after this list.

  • From the end-user perspective it's useful to think about task alignment when reading benchmark results. For example, an AI engineer who builds a RAG pipeline and knows that the most typical use case involves assembling multiple pieces of information from different sources would find it more meaningful to assess the performance of their retrieval model on multi-hop QA datasets like HotpotQA rather than on the global average across the whole BEIR benchmark.
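As promised above, here is a minimal paired-bootstrap sketch for comparing two runs given their per-query nDCG@10 scores; the input numbers are placeholders and the notebook contains the full implementation:

import numpy as np

def bootstrap_diff_ci(ndcg_a, ndcg_b, n_resamples=10_000, alpha=0.05, seed=0):
    """Bootstrap confidence interval for the mean per-query nDCG@10 difference
    between system A and system B (paired: both lists cover the same queries)."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(ndcg_a) - np.asarray(ndcg_b)
    means = [rng.choice(diffs, size=len(diffs), replace=True).mean()
             for _ in range(n_resamples)]
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# If the interval excludes 0, the difference between the two runs is significant
low, high = bootstrap_diff_ci(ndcg_a=[0.61, 0.40, 0.83], ndcg_b=[0.55, 0.44, 0.71])
print(low, high)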

In the next blog post we will dive deeper into the use of Phi-3 as an LLM judge and the journey of tuning it to predict relevance.

Ready to try this out on your own? Start a free trial.
Looking to build RAG into your apps? Want to try different LLMs with a vector database?
Check out our sample notebooks for LangChain, Cohere and more on Github, and join Elasticsearch Relevance Engine training now.