• We address the ResPubliQA 2009 Question Answering task proposed at CLEF.
• We propose an approach based on IR and validation.
• Our approach obtains some of the best results in the task.
• The results show the utility of using validation in this task.
ResPubliQA is a Question Answering (QA) evaluation task over European legislation whose first edition was held at the Cross Language Evaluation Forum (CLEF) 2009. The exercise consists of extracting a relevant paragraph of text that satisfies the information need expressed by a natural language question. The definition of the task makes it possible to compare current QA technologies with pure Information Retrieval (IR) approaches and to introduce Answer Validation technologies into QA systems. In this paper we describe a system developed for this task. Our system is composed of an IR phase focused on improving QA results, a validation step for removing unpromising paragraphs, and a module based on n-gram overlap for selecting the final answer, as well as a selection module that uses Lexical Entailment. While the IR module has contributed to obtaining promising results, the performance of the validation module still needs to be improved. On the other hand, the n-gram ranking improved on the ranking given by the IR module.
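The abstract describes the n-gram overlap module only at a high level. As a rough illustration, the sketch below shows one way a question/paragraph n-gram overlap score could be used to re-rank IR candidates; the tokenization, n-gram sizes, and weighting are assumptions for the sake of the example, not the authors' exact configuration.

```python
# Minimal sketch of n-gram-overlap re-ranking of candidate paragraphs.
# Whitespace tokenization, max_n=3, and linear length weighting are assumptions.

def ngrams(tokens, n):
    """Return the set of n-grams (as tuples) of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_score(question, paragraph, max_n=3):
    """Score a paragraph by the fraction of question n-grams it contains,
    giving longer n-grams a higher weight."""
    q_tokens = question.lower().split()
    p_tokens = paragraph.lower().split()
    score = 0.0
    for n in range(1, max_n + 1):
        q_ngrams = ngrams(q_tokens, n)
        if not q_ngrams:
            continue
        p_ngrams = ngrams(p_tokens, n)
        score += n * len(q_ngrams & p_ngrams) / len(q_ngrams)
    return score

def rank_paragraphs(question, candidates):
    """Re-rank IR candidate paragraphs by n-gram overlap with the question."""
    return sorted(candidates, key=lambda p: overlap_score(question, p), reverse=True)

if __name__ == "__main__":
    # Toy example: the legislation-like paragraph outranks the unrelated one.
    q = "What is the minimum age for driving a motor vehicle?"
    paras = [
        "Member States shall set the minimum age for driving a motor vehicle.",
        "This regulation concerns the labelling of foodstuffs.",
    ]
    print(rank_paragraphs(q, paras)[0])
```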
Journal: Expert Systems with Applications - Volume 40, Issue 15, 1 November 2013, Pages 5811–5816