Article code: 515603
Journal code: 867049
Publication year: 2012
Full text: 14-page PDF (free download)
English title of the ISI article
Using crowdsourcing for TREC relevance assessment
Related subjects
Engineering and Basic Sciences; Computer Engineering; Computer Science Software
English abstract

Crowdsourcing has recently gained a lot of attention as a tool for conducting different kinds of relevance evaluations. At a very high level, crowdsourcing describes the outsourcing of tasks to a large group of people instead of assigning such tasks to an in-house employee. This approach makes it possible to conduct information retrieval experiments extremely fast, with good results and at a low cost. This paper reports on the first attempts to combine crowdsourcing and TREC: our aim is to validate the use of crowdsourcing for relevance assessment. To this end, we use the Amazon Mechanical Turk crowdsourcing platform to run experiments on TREC data, evaluate the outcomes, and discuss the results. We place particular emphasis on experiment design, execution, and quality control in order to gather useful results, with special attention to the issue of agreement among assessors. Our position, supported by the experimental results, is that crowdsourcing is a cheap, quick, and reliable alternative for relevance assessment.
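The abstract singles out agreement among assessors as the central quality-control question. As a point of reference only, and not taken from the paper, the sketch below shows one standard way such agreement is quantified: Cohen's kappa between two sets of binary relevance judgments, for example official TREC qrels versus aggregated MTurk labels. All judgment values in the example are invented for illustration.

    # A minimal sketch (not taken from the paper) of one common agreement
    # measure: Cohen's kappa between two sets of binary relevance judgments,
    # e.g. official TREC qrels versus aggregated MTurk worker labels.
    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa for two equal-length label sequences."""
        n = len(labels_a)
        # Observed agreement: fraction of items on which the two assessors agree.
        p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Expected chance agreement from the marginal label distributions.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        p_chance = sum((freq_a[label] / n) * (freq_b[label] / n)
                       for label in set(labels_a) | set(labels_b))
        return (p_observed - p_chance) / (1 - p_chance)

    # Hypothetical judgments (1 = relevant, 0 = not relevant), one entry per
    # topic-document pair judged both by a TREC assessor and by the crowd.
    trec_qrels  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
    crowd_votes = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
    print(f"kappa = {cohens_kappa(trec_qrels, crowd_votes):.2f}")  # kappa = 0.60

Kappa close to 1 indicates agreement well beyond what chance alone would produce; the paper reports its own agreement figures, which are not reproduced here.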


► Amazon Mechanical Turk Workers as relevance assessors for a TREC collection.
► Experimental study measures agreement, reliability, speed, and cost.
► Results demonstrate that crowdsourcing quality is comparable to that of TREC assessors.
► Results for both TREC (binary) and University of Tampere (graded) assessments (see the sketch after this list).

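The last highlight mentions both binary (TREC) and graded (University of Tampere) judgments. As an illustration only, with a 0-3 scale and a threshold that are assumptions rather than the choices made in the study, a graded scale is typically collapsed to binary relevance before the two kinds of judgments can be compared directly:

    # A minimal sketch, assuming a 0-3 graded scale; the threshold is arbitrary.
    def to_binary(graded_label, threshold=1):
        # Treat any grade at or above the threshold as relevant (1), else not (0).
        return 1 if graded_label >= threshold else 0

    graded_judgments = [0, 3, 1, 2, 0]          # hypothetical graded labels
    binary_judgments = [to_binary(g) for g in graded_judgments]
    print(binary_judgments)                     # [0, 1, 1, 1, 0]
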
Publisher
Database: Elsevier - ScienceDirect
Journal: Information Processing & Management - Volume 48, Issue 6, November 2012, Pages 1053–1066
Authors