Article Code | Journal Code | Publication Year | English Article | Full-Text Version
---|---|---|---|---
515043 | 866940 | 2011 | 14-page PDF | Free download

Traditional Information Retrieval (IR) models assume that the index terms of queries and documents are statistically independent of each other, an assumption that is intuitively wrong. This paper proposes incorporating the lexical and syntactic knowledge produced by a POS tagger and a syntactic chunker into traditional IR similarity measures in order to capture dependency information between terms. Our proposal draws on theories of discourse structure: documents and queries are segmented into sentences and entities, so dependencies are measured between entities rather than between individual terms. Moreover, discourse references are resolved for each entity. The approach has been evaluated on Spanish and English corpora as well as on Question Answering tasks, obtaining significant improvements.
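As a rough illustration of this entity-level proximity idea, the sketch below is a minimal, self-contained example and not the authors' model: it assumes the POS-tagger/chunker step has already produced multi-word query entities, and it scores a document by how tightly one occurrence of each entity clusters.

```python
# Minimal sketch (assumed, not the paper's implementation): treat multi-word
# entities from the query as units and score a document by the tightest window
# that covers one occurrence of each entity, instead of scoring independent terms.
from itertools import product

def entity_positions(doc_tokens, entity):
    """Return start positions where the multi-word entity occurs in the document."""
    n = len(entity)
    return [i for i in range(len(doc_tokens) - n + 1)
            if doc_tokens[i:i + n] == entity]

def proximity_score(doc_tokens, query_entities):
    """Score a document by how closely the query entities co-occur.

    Entities missing from the document contribute nothing; smaller covering
    windows (entities appearing close together) yield higher scores.
    """
    positions = [entity_positions(doc_tokens, e) for e in query_entities]
    positions = [p for p in positions if p]  # drop entities absent from the doc
    if not positions:
        return 0.0
    # Tightest window over the start positions of one occurrence per entity.
    best_span = min(max(combo) - min(combo) + 1
                    for combo in product(*positions))
    return len(positions) / float(best_span)

doc = "the syntactic chunker groups the noun phrase before retrieval".split()
query = [["syntactic", "chunker"], ["noun", "phrase"]]
print(proximity_score(doc, query))  # 0.4: both entities found within a 5-token window
```

The paper itself goes further, applying different proximity measures per lexical type and combining them with a standard similarity measure; the sketch only conveys the entity-as-unit intuition.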
Research highlights
► Parsing the query to obtain the set of query terms used to compute Term Proximity (TP) information.
► Applying different TP measures depending on the lexical type of each query term.
► Applying TP measures to phrases as well as to single terms.
► Obtaining consistent results even under the worst conditions reported by previous research.
Journal: Information Processing & Management - Volume 47, Issue 5, September 2011, Pages 692–705