Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
4945040 | Information Systems | 2017 | 8 Pages |
Abstract
Automatically generating high-quality text for tasks such as translation, summarization, and narrative writing is difficult, as these tasks require creativity, which only humans currently exhibit. However, crowdsourcing such tasks remains challenging, since they are tedious for humans and can require expert knowledge. We therefore explore deployment strategies for crowdsourcing text creation tasks to improve the effectiveness of the crowdsourcing process. We measure effectiveness through the quality of the output text, the cost of deploying the task, and the latency in obtaining the output. We formalize a deployment strategy in crowdsourcing along three dimensions: work structure, workforce organization, and work style. Work structure can be either simultaneous or sequential, workforce organization either independent or collaborative, and work style either human-only or a combination of machine and human intelligence. We implement these strategies for translation, summarization, and narrative writing tasks by designing a semi-automatic tool that uses the Amazon Mechanical Turk API, and we experiment with them under different input settings such as text length, number of sources, and topic popularity. We report our findings on the effectiveness of each strategy and provide recommendations to guide requesters in selecting the best strategy when deploying text creation tasks.
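The three-dimensional formalization described in the abstract implies a small space of candidate deployment strategies. A minimal sketch enumerating that space (the variable names are illustrative, not the paper's formal notation):

```python
from itertools import product

# The three strategy dimensions from the abstract; labels are
# illustrative, not taken from the paper's formal notation.
work_structure = ["simultaneous", "sequential"]
workforce_organization = ["independent", "collaborative"]
work_style = ["human-only", "hybrid (human + machine)"]

# A deployment strategy is one choice along each dimension,
# giving 2 x 2 x 2 = 8 candidate strategies in total.
strategies = list(product(work_structure, workforce_organization, work_style))

for structure, organization, style in strategies:
    print(f"{structure} / {organization} / {style}")
```

Each of these eight combinations can then be evaluated on the three effectiveness criteria the paper considers: output quality, cost, and latency.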
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Ria Mae Borromeo, Thomas Laurent, Motomichi Toyama, Maha Alsayasneh, Sihem Amer-Yahia, Vincent Leroy