CoverageBench: Evaluating Information Coverage across Tasks and Domains
Authors
Saron Samuel, Andrew Yates, Dawn Lawrie, Ian Soboroff, Trevor Adriaanse, Benjamin Van Durme, Eugene Yang
Journal
No journal information available
Year
2026
Category
Country
United Kingdom
Abstract
We wish to measure the information coverage of an ad hoc retrieval algorithm, that is, how much of the range of available relevant information is covered by the search results. Information coverage is a central aspect of retrieval, especially when the retrieval system is integrated with generative models in a retrieval-augmented generation (RAG) system. The classic metrics for ad hoc retrieval, precision and recall, reward a system as more and more relevant documents are retrieved. However, since relevance in ad hoc test collections is defined for each document without any relation to other documents that might contain the same information, high recall is sufficient but not necessary to ensure coverage. The same is true of other metrics such as rank-biased precision (RBP), normalized discounted cumulative gain (nDCG), and mean average precision (MAP). Test collections developed around the notion of diversity ranking in web search incorporate multiple aspects that support a concept of coverage in the web domain. In this work, we construct a suite of collections for evaluating information coverage from existing collections. This suite offers researchers a unified testbed spanning multiple genres and tasks. All topics, nuggets, relevance labels, and baseline rankings are released on Hugging Face Datasets, along with instructions for accessing the publicly available document collections.
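To illustrate the distinction the abstract draws, here is a minimal sketch contrasting document-level recall with nugget-level information coverage. The nugget-to-document mapping, the document IDs, and the two metric functions are hypothetical illustrations, not the paper's actual evaluation code: coverage here counts the fraction of information nuggets attested by at least one retrieved document, so a ranking can miss a relevant document (lowering recall) yet still cover every nugget.

```python
def recall(retrieved, relevant):
    """Fraction of relevant documents that were retrieved."""
    return len(set(retrieved) & set(relevant)) / len(relevant)

def coverage(retrieved, nugget_docs):
    """Fraction of nuggets covered by at least one retrieved document.

    nugget_docs maps each nugget ID to the set of documents containing it.
    """
    hit = sum(1 for docs in nugget_docs.values() if docs & set(retrieved))
    return hit / len(nugget_docs)

# Hypothetical topic: three relevant documents, but d1 and d2 carry
# the same nugget, so retrieving either one suffices for coverage.
relevant = ["d1", "d2", "d3"]
nugget_docs = {"n1": {"d1", "d2"}, "n2": {"d3"}}

retrieved = ["d1", "d3"]
print(round(recall(retrieved, relevant), 2))   # 0.67: d2 was missed
print(coverage(retrieved, nugget_docs))        # 1.0: both nuggets covered
```

Missing d2 costs a third of the recall, but coverage stays perfect because its nugget is already attested by d1; this is the sense in which high recall is sufficient but not necessary for coverage.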
Article Statistics
Views: 419
Downloads: 0
Citations: 24