Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
6940728 | Pattern Recognition Letters | 2018 | 5 | |
Abstract
A dialogue system should capture speakers' intentions, which can be represented by combinations of speech acts, predicators, and sentiments. To identify these intentions from speakers' utterances, many studies have dealt with speech acts, predicators, and sentiments independently. However, these three elements of a speaker's intention are tightly associated with one another. To reflect this association, we propose a convolutional neural network model that simultaneously identifies speech acts, predicators, and sentiments. The proposed model has hidden layers designed to embed informative abstractions appropriate for speech act identification, predicator identification, and sentiment identification. Nodes in the hidden layers are partially trained by three cycles of error backpropagation, one cycle each for the nodes associated with speech act identification, predicator identification, and sentiment identification. In the experiments, the proposed model achieved higher F1-scores than independent models: 6.8% higher in speech act identification, 6.2% higher in predicator identification, and 4.9% higher in sentiment identification. Based on these results, we conclude that the proposed integration architecture and partial error backpropagation help to increase the performance of intention identification.
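The abstract describes a shared CNN whose hidden layer is partitioned into task-associated node groups and trained by cycling error backpropagation over the three tasks. The sketch below shows one way such an architecture could be wired in PyTorch; it is not the authors' code, and the layer sizes, the choice to feed every output head from all three partitions, and the decision to include the shared encoder in every per-task optimizer are assumptions made purely for illustration.

```python
# Minimal sketch (assumptions only, not the paper's implementation) of a
# multi-task CNN with a partitioned hidden layer and task-by-task updates.
import torch
import torch.nn as nn

TASKS = ("speech_act", "predicator", "sentiment")

class JointIntentionCNN(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=128,
                 hidden_per_task=64, n_speech_acts=10, n_predicators=50,
                 n_sentiments=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Shared convolutional feature extractor over the utterance.
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in (2, 3, 4)]
        )
        feat_dim = n_filters * 3
        # Hidden layer split into three task-associated partitions.
        self.hidden = nn.ModuleDict(
            {t: nn.Linear(feat_dim, hidden_per_task) for t in TASKS}
        )
        # One output head per task; each head sees all three partitions so
        # the tasks can inform one another (an assumption of this sketch).
        joint_dim = hidden_per_task * 3
        self.heads = nn.ModuleDict({
            "speech_act": nn.Linear(joint_dim, n_speech_acts),
            "predicator": nn.Linear(joint_dim, n_predicators),
            "sentiment": nn.Linear(joint_dim, n_sentiments),
        })

    def forward(self, token_ids):
        x = self.embedding(token_ids).transpose(1, 2)          # (B, emb, T)
        feats = torch.cat(
            [torch.relu(c(x)).max(dim=2).values for c in self.convs], dim=1
        )
        hidden = {t: torch.relu(self.hidden[t](feats)) for t in TASKS}
        joint = torch.cat([hidden[t] for t in TASKS], dim=1)
        return {t: self.heads[t](joint) for t in TASKS}


def build_task_optimizers(model, lr=1e-3):
    """One optimizer per task, covering that task's hidden partition and
    output head; the shared encoder is added to every group here (one
    possible choice, not necessarily the authors')."""
    shared = list(model.embedding.parameters()) + list(model.convs.parameters())
    return {
        t: torch.optim.Adam(
            shared
            + list(model.hidden[t].parameters())
            + list(model.heads[t].parameters()),
            lr=lr,
        )
        for t in TASKS
    }


def partial_backprop_step(model, optimizers, token_ids, labels):
    """One batch, three backpropagation cycles: each cycle computes one
    task's loss and updates only that task's parameter group."""
    criterion = nn.CrossEntropyLoss()
    losses = {}
    for t in TASKS:
        logits = model(token_ids)[t]
        loss = criterion(logits, labels[t])
        model.zero_grad()        # clear gradients from the previous cycle
        loss.backward()
        optimizers[t].step()     # update only this task's parameters
        losses[t] = loss.item()
    return losses


# Hypothetical usage: a batch of 8 utterances padded to 20 token ids.
if __name__ == "__main__":
    model = JointIntentionCNN()
    optimizers = build_task_optimizers(model)
    tokens = torch.randint(0, 10000, (8, 20))
    labels = {"speech_act": torch.randint(0, 10, (8,)),
              "predicator": torch.randint(0, 50, (8,)),
              "sentiment": torch.randint(0, 3, (8,))}
    print(partial_backprop_step(model, optimizers, tokens, labels))
```

In this sketch, the "partial" training is realized by giving each task its own optimizer over a subset of parameters, so each of the three backpropagation cycles leaves the other tasks' hidden partitions and heads untouched; the exact parameter-sharing scheme in the paper may differ.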
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Vision and Pattern Recognition
Authors
Minkyoung Kim, Harksoo Kim