Article ID | Journal ID | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
535020 | 870312 | 2016 | 7-page PDF | Free download |
• We create text representations by weighting word embeddings using idf information.
• A novel median-based loss is designed to mitigate the negative effect of outliers.
• A dataset of semantically related textual pairs from Wikipedia and Twitter is made.
• Our method outperforms all word embedding baselines in a semantic similarity task.
• Our method works out-of-the-box and thus requires no retraining in different contexts.
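The median-based loss highlighted above is motivated by robustness to outlier pairs. The paper's exact formulation is not given here, so the following is only an illustrative sketch of the general idea: a contrastive-style objective that compares the *median* distance of semantically related pairs against that of non-related pairs, since the median, unlike the mean, is not dragged around by a few extreme pairs. The `margin` hyperparameter is a hypothetical addition, not from the paper.

```python
import numpy as np

def median_loss(pos_dists, neg_dists, margin=1.0):
    """Illustrative median-based contrastive loss (not the paper's exact loss).

    pos_dists: distances between semantically related text pairs.
    neg_dists: distances between non-related text pairs.
    Using medians instead of means mitigates the effect of outlier pairs.
    """
    pos_dists = np.asarray(pos_dists, dtype=float)
    neg_dists = np.asarray(neg_dists, dtype=float)
    # Related pairs should be closer than non-related pairs by `margin`;
    # hinge at zero so a satisfied margin incurs no loss.
    return max(0.0, margin + np.median(pos_dists) - np.median(neg_dists))
```

Note how a single extreme positive distance (e.g. 5.0 among values near 0.2) leaves the median, and hence the loss, unchanged, whereas a mean-based loss would be dominated by it.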
Short text messages such as tweets are very noisy and sparse in their use of vocabulary. Traditional textual representations, such as tf-idf, have difficulty grasping the semantic meaning of such texts, which is important in applications such as event detection, opinion mining, and news recommendation. We constructed a method based on semantic word embeddings and frequency information to arrive at low-dimensional representations for short texts, designed to capture semantic similarity. To this end we designed a weight-based model and a learning procedure based on a novel median-based loss function. This paper discusses the details of our model and the optimization methods, together with experimental results on both Wikipedia and Twitter data. We find that our method outperforms the baseline approaches in the experiments, and that it generalizes well to different word embeddings without retraining. Our method is therefore capable of retaining most of the semantic information in the text, and is applicable out-of-the-box.
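The baseline building block of the abstract, combining word embeddings with frequency information, can be sketched as an idf-weighted average of word vectors. The `embeddings` table and the corpus statistics (`doc_freq`, `n_docs`) below are placeholders for a real pre-trained embedding and corpus; note that the paper additionally *learns* the aggregation weights, which this plain idf sketch does not do.

```python
import math
import numpy as np

def idf_weighted_embedding(tokens, embeddings, doc_freq, n_docs):
    """Idf-weighted average of word vectors for a short text (sketch).

    tokens:    the tokenized short text.
    embeddings: dict mapping token -> embedding vector (hypothetical table).
    doc_freq:  dict mapping token -> document frequency in some corpus.
    n_docs:    total number of documents in that corpus.
    Rare (high-idf) words contribute more to the text representation.
    """
    vecs, weights = [], []
    for t in tokens:
        if t in embeddings:
            # Smoothed idf; +1 avoids division by zero for unseen tokens.
            idf = math.log(n_docs / (1 + doc_freq.get(t, 0)))
            vecs.append(embeddings[t])
            weights.append(idf)
    if not vecs:
        return None  # no known tokens in this text
    w = np.asarray(weights)
    return (np.asarray(vecs) * w[:, None]).sum(axis=0) / w.sum()
```

In this sketch a ubiquitous word (document frequency near `n_docs`) receives an idf near zero and is effectively ignored, which is one way such a weighted aggregate can stay closer to the semantic content of a tweet than a uniform average.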
Journal: Pattern Recognition Letters - Volume 80, 1 September 2016, Pages 150–156