Article code: 6864508
Journal code: 1439543
Publication year: 2018
English article: 10 pages, PDF
Full-text version: free download
English title (ISI paper)
Lattice-to-sequence attentional Neural Machine Translation models
Keywords
Neural machine translation, Word lattice, Recurrent neural network, Gated recurrent unit
Related subjects
Engineering and Basic Sciences > Computer Engineering > Artificial Intelligence
English abstract
The dominant Neural Machine Translation (NMT) models usually resort to word-level modeling to embed input sentences into semantic space. However, this may not be optimal for encoder modeling in NMT, especially for languages whose tokenizations are often ambiguous: on the one hand, tokenization errors may negatively affect the encoder; on the other hand, the optimal tokenization granularity for NMT is unclear. In this paper, we propose lattice-to-sequence attentional NMT models, which generalize the standard Recurrent Neural Network (RNN) encoder to lattice topology. Specifically, they take as input a word lattice that compactly encodes many tokenization alternatives, and learn to generate the hidden state for the current step from multiple inputs and hidden states in previous steps. Compared with the standard RNN encoder, the proposed encoders not only alleviate the negative impact of tokenization errors but are also more expressive and flexible in encoding the meaning of input sentences. Experimental results on both Chinese-English and Japanese-English translation tasks demonstrate the effectiveness of our models.
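To make the lattice generalization concrete, below is a minimal NumPy sketch of a GRU encoder whose recurrence follows a lattice topology instead of a chain. Everything here is illustrative: the class name, the parameter names, and the mean-pooling used to combine multiple predecessor states are assumptions for exposition, not the authors' implementation (the abstract only states that the hidden state for the current step is generated from multiple inputs and hidden states of previous steps).

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LatticeGRUEncoder:
    # Hypothetical sketch: a GRU cell applied over lattice nodes in
    # topological order. A node may have several predecessors (one per
    # tokenization alternative); their hidden states are pooled into a
    # single "previous" state before the usual GRU update.
    def __init__(self, emb_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        self.Wz = rng.normal(0, s, (hid_dim, emb_dim))  # update gate, input
        self.Uz = rng.normal(0, s, (hid_dim, hid_dim))  # update gate, recurrent
        self.Wr = rng.normal(0, s, (hid_dim, emb_dim))  # reset gate, input
        self.Ur = rng.normal(0, s, (hid_dim, hid_dim))  # reset gate, recurrent
        self.Wh = rng.normal(0, s, (hid_dim, emb_dim))  # candidate, input
        self.Uh = rng.normal(0, s, (hid_dim, hid_dim))  # candidate, recurrent
        self.hid_dim = hid_dim

    def step(self, x, h_prevs):
        # Combine predecessor states; mean pooling is one simple choice.
        h_prev = np.mean(h_prevs, axis=0) if h_prevs else np.zeros(self.hid_dim)
        z = sigmoid(self.Wz @ x + self.Uz @ h_prev)
        r = sigmoid(self.Wr @ x + self.Ur @ h_prev)
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h_prev))
        return (1.0 - z) * h_prev + z * h_tilde

    def encode(self, embeddings, preds):
        # embeddings: node embeddings in topological order;
        # preds[i]: indices of the predecessor nodes of node i.
        states = []
        for x, ps in zip(embeddings, preds):
            states.append(self.step(x, [states[p] for p in ps]))
        return states  # one annotation vector per lattice node

# Toy lattice: nodes 0 and 1 are alternative tokenizations of the same
# span; node 2 can follow either, so it has two predecessors.
enc = LatticeGRUEncoder(emb_dim=4, hid_dim=8)
embs = [np.ones(4), -np.ones(4), 0.5 * np.ones(4)]
states = enc.encode(embs, preds=[[], [], [0, 1]])

The attentional decoder can then attend over one annotation vector per lattice node rather than per word, which is how the encoder exposes all tokenization alternatives to the rest of the model.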
Publisher
Database: Elsevier - ScienceDirect
Journal: Neurocomputing - Volume 284, 5 April 2018, Pages 138-147
Authors