| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 10368480 | Computer Speech & Language | 2015 | 19 | |
Abstract
Our second set of experiments addresses limited-data within-domain adaptation, i.e., adapting an existing model trained on a large data set using a smaller amount of data from the target sub-domain. In this scenario, data from the target sub-domain is not available at the time the language model is trained, but instead becomes available gradually over time. We demonstrate that the implicit interpolation carried out by applying curriculum learning methods to recurrent neural network language models (RNNLMs) outperforms conventional interpolation and has the potential to make better use of limited adaptation data.
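The abstract contrasts two adaptation strategies: conventional interpolation, which mixes the probability distributions of a background model and a target sub-domain model at prediction time, and curriculum-style training, which orders the training data so the small target sub-domain set is presented last and thereby performs an implicit interpolation. The sketch below is a toy illustration of that contrast, not the authors' implementation; the model architecture, sizes, corpora, and helper names (`TinyRNNLM`, `curriculum_batches`, `adapt`) are assumptions made for illustration, and the data are random placeholders.

```python
# Minimal sketch (assumed setup, not the paper's code) contrasting curriculum-ordered
# adaptation of an RNN language model with conventional distribution interpolation.
import torch
import torch.nn as nn


class TinyRNNLM(nn.Module):
    """Toy word-level RNN language model (illustrative sizes)."""
    def __init__(self, vocab_size, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ids):
        h, _ = self.rnn(self.embed(ids))
        return self.out(h)  # logits over the vocabulary, shape (B, T, V)


def interpolate(p_background, p_target, lam=0.5):
    """Conventional linear interpolation of two LM distributions."""
    return lam * p_target + (1.0 - lam) * p_background


def curriculum_batches(background_batches, target_batches):
    """Curriculum ordering: general-domain batches first, then the small
    target sub-domain batches, so the latest updates bias the model toward
    the sub-domain (the 'implicit interpolation' the abstract refers to)."""
    return list(background_batches) + list(target_batches)


def adapt(model, batches, vocab_size, lr=0.1):
    """One pass of next-word training over the given batch order."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for batch in batches:  # batch: (B, T+1) token ids
        inputs, targets = batch[:, :-1], batch[:, 1:]
        loss = loss_fn(model(inputs).reshape(-1, vocab_size), targets.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    vocab = 100
    # Hypothetical stand-ins for a large background corpus and a small target
    # sub-domain corpus (random token ids, just so the sketch runs end to end).
    background = [torch.randint(0, vocab, (8, 21)) for _ in range(10)]
    target = [torch.randint(0, vocab, (8, 21)) for _ in range(2)]

    # (a) Curriculum-style adaptation: one model, background data first, target last.
    curriculum_model = adapt(TinyRNNLM(vocab),
                             curriculum_batches(background, target), vocab)

    # (b) Conventional interpolation: mix a background-only model and a
    #     target-only model at prediction time.
    bg_model = adapt(TinyRNNLM(vocab), background, vocab)
    tg_model = adapt(TinyRNNLM(vocab), target, vocab)
    with torch.no_grad():
        probe = torch.randint(0, vocab, (1, 6))
        p_mix = interpolate(torch.softmax(bg_model(probe), dim=-1),
                            torch.softmax(tg_model(probe), dim=-1), lam=0.3)
```

In strategy (a) the sub-domain influence is baked into a single model's weights by the training order, whereas in strategy (b) two separate models must be stored and combined at every prediction; the abstract's claim is that the former makes better use of the small adaptation set.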
Related Topics
Physical Sciences and Engineering › Computer Science › Signal Processing
Authors
Yangyang Shi, Martha Larson, Catholijn M. Jonker