|Article Code||Journal Code||Publication Year||English Article||Persian Translation||Full-Text Version|
|4966379||1365117||2018||23-page PDF||Not available||Download|
• We proposed a novel learning-to-rank based topic selection method to more intelligently design topic sets in test collections. Our method selects the best topics from a topic pool in order to maximize the reliability of evaluation while reducing the required human judging effort.
• We revisited shallow vs. deep judging using our intelligent topic selection method, considering a wider range of factors impacting this trade-off than previously studied. Based on our extensive experiments, our findings are as follows.
• Shallow judging is preferable to deep judging if topics are selected randomly, confirming findings of prior work. However, when topics are selected intelligently, deep judging often achieves greater evaluation reliability for the same evaluation budget than shallow judging.
• As the topic generation cost increases, deep judging becomes more cost-effective than shallow judging in optimizing the evaluation budget.
• Assuming that judging speed increases as more documents for the same topic are judged, increased judging speed has a significant effect on evaluation reliability, suggesting that it should be another parameter considered in the deep vs. shallow judging trade-off.
• Assuming that short topic generation times reduce the quality of topics and, thereby, the consistency of relevance judgments, it is better to invest a portion of the evaluation budget in increasing topic quality instead of collecting more judgments for low-quality topics.
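To make the budget-constrained selection idea concrete, here is a minimal sketch of greedy topic selection under a fixed judging budget. This is not the paper's learning-to-rank model: the `topic_utilities` values stand in for whatever learned prediction of a topic's contribution to evaluation reliability the selector uses, and all names and numbers below are illustrative assumptions.

```python
# Hypothetical sketch: greedy budget-constrained topic selection.
# Utility scores stand in for learned predictions of how much each
# topic improves evaluation reliability; costs are judging effort.

def select_topics(topic_utilities, topic_costs, budget):
    """Greedily pick topics with the best utility-per-cost ratio
    until the judging budget is exhausted."""
    order = sorted(topic_utilities,
                   key=lambda t: topic_utilities[t] / topic_costs[t],
                   reverse=True)
    selected, spent = [], 0
    for t in order:
        if spent + topic_costs[t] <= budget:
            selected.append(t)
            spent += topic_costs[t]
    return selected, spent

# Illustrative pool of four candidate topics.
utilities = {"t1": 0.9, "t2": 0.5, "t3": 0.8, "t4": 0.3}
costs = {"t1": 40, "t2": 20, "t3": 30, "t4": 10}
chosen, spent = select_topics(utilities, costs, budget=60)
```

With these toy numbers, the cheap, efficient topics `t4`, `t3`, and `t2` fill the budget of 60 before the highest-utility but most expensive topic `t1` can be afforded, which is exactly the kind of trade-off an intelligent selector must navigate.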
While test collections provide the cornerstone for Cranfield-based evaluation of information retrieval (IR) systems, it has become practically infeasible to rely on traditional pooling techniques to construct test collections at the scale of today's massive document collections (e.g., ClueWeb12's 700M+ Web pages). This has motivated a flurry of studies proposing more cost-effective yet reliable IR evaluation methods. In this paper, we propose a new intelligent topic selection method which reduces the number of search topics (and thereby costly human relevance judgments) needed for reliable IR evaluation. To rigorously assess our method, we integrate previously disparate lines of research on intelligent topic selection and deep vs. shallow judging (i.e., whether it is more cost-effective to collect many relevance judgments for a few topics or a few judgments for many topics). While prior work on intelligent topic selection has never been evaluated against shallow judging baselines, prior work on deep vs. shallow judging has largely argued for shallow judging, but assuming random topic selection. We argue that for evaluating any topic selection method, ultimately one must ask whether it is actually useful to select topics, or should one simply perform shallow judging over many topics? In seeking a rigorous answer to this over-arching question, we conduct a comprehensive investigation over a set of relevant factors never previously studied together: 1) method of topic selection; 2) the effect of topic familiarity on human judging speed; and 3) how different topic generation processes (requiring varying human effort) impact (i) budget utilization and (ii) the resultant quality of judgments.
Experiments on the NIST TREC Robust 2003 and Robust 2004 test collections show that not only can we reliably evaluate IR systems with fewer topics, but also that: 1) when topics are intelligently selected, deep judging often yields greater evaluation reliability for the same budget than shallow judging; and 2) topic familiarity and topic generation costs greatly impact the evaluation cost vs. reliability trade-off. Our findings challenge conventional wisdom in showing that deep judging is often preferable to shallow judging when topics are selected intelligently.
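The role of topic generation cost in the deep vs. shallow trade-off can be illustrated with simple budget arithmetic. The sketch below is a hypothetical cost model with made-up numbers, not figures from the paper: each topic pays a one-time generation cost plus a per-document judging cost.

```python
# Hypothetical budget arithmetic for the deep vs. shallow trade-off.
# All costs and counts are illustrative, not taken from the paper.

def total_cost(n_topics, gen_cost, depth, judge_cost):
    """Total evaluation cost: each topic incurs a one-time
    generation cost plus a judging cost per judged document."""
    return n_topics * (gen_cost + depth * judge_cost)

# With a cheap topic generation cost of 5, these two designs
# exhaust the same budget of 1500:
shallow = total_cost(n_topics=100, gen_cost=5, depth=10, judge_cost=1)
deep = total_cost(n_topics=25, gen_cost=5, depth=55, judge_cost=1)

# Raising the generation cost to 20 penalizes the shallow design
# far more, since it pays that cost 100 times instead of 25.
shallow_pricey = total_cost(n_topics=100, gen_cost=20, depth=10, judge_cost=1)
deep_pricey = total_cost(n_topics=25, gen_cost=20, depth=55, judge_cost=1)
```

Because the shallow design multiplies the per-topic generation cost by many more topics, rising generation costs shift the balance toward deep judging, mirroring the trend reported in the abstract.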
Journal: Information Processing & Management - Volume 54, Issue 1, January 2018, Pages 37-59