Article ID: 10362201
Journal: Pattern Recognition Letters
Published Year: 2005
Pages: 11 pages
File Type: PDF
Abstract
Convolution kernels and recursive neural networks are both suitable approaches for supervised learning when the input is a discrete structure such as a labeled tree or graph. We compare these techniques on two natural language problems. In both problems, the learning task consists of choosing the best alternative tree from a set of candidates. We report on an empirical evaluation of the two methods on a large corpus of parsed sentences and speculate on the role played by the representation and the loss function.
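The abstract refers to convolution kernels over labeled trees. As a minimal sketch of what such a kernel looks like, the following Python code implements a Collins–Duffy-style subset-tree kernel that counts the subtrees shared by two parse trees; the Node class and function names are illustrative assumptions, not the implementation used in the paper.

```python
# Illustrative convolution (subset-tree) kernel over labeled parse trees,
# in the spirit of Collins & Duffy. Names and details are assumptions for
# illustration only, not the paper's implementation.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Node:
    label: str                               # node label, e.g. a syntactic category
    children: List["Node"] = field(default_factory=list)

    def production(self) -> Tuple[str, Tuple[str, ...]]:
        # A node's production: its label plus the labels of its children.
        return (self.label, tuple(c.label for c in self.children))


def _delta(n1: Node, n2: Node) -> float:
    """Number of common subtrees rooted at the node pair (n1, n2)."""
    if not n1.children or not n2.children:
        return 0.0
    if n1.production() != n2.production():
        return 0.0
    prod = 1.0
    for c1, c2 in zip(n1.children, n2.children):
        prod *= 1.0 + _delta(c1, c2)
    return prod


def _collect(t: Node) -> List[Node]:
    """All nodes of a tree, gathered iteratively."""
    out, stack = [], [t]
    while stack:
        n = stack.pop()
        out.append(n)
        stack.extend(n.children)
    return out


def tree_kernel(t1: Node, t2: Node) -> float:
    """Convolution kernel: sum of shared subtrees over all node pairs."""
    return sum(_delta(a, b) for a in _collect(t1) for b in _collect(t2))


if __name__ == "__main__":
    # Two small noun-phrase fragments that share the NP -> D N structure.
    np1 = Node("NP", [Node("D", [Node("the")]), Node("N", [Node("cat")])])
    np2 = Node("NP", [Node("D", [Node("the")]), Node("N", [Node("dog")])])
    print(tree_kernel(np1, np2))
```

In a re-ranking setup like the one described in the abstract, a kernel of this kind would score each candidate parse tree against training examples (e.g. inside a kernel machine), and the candidate with the highest score would be selected.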
Related Topics
Physical Sciences and Engineering > Computer Science > Computer Vision and Pattern Recognition