Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6862825 | Neural Networks | 2018 | 25 |
Abstract
Most current textual reasoning models cannot learn human-like reasoning processes, and thus lack interpretability and logical accuracy. To help address this issue, we propose a novel reasoning model that learns to activate logic rules explicitly via deep reinforcement learning. It takes the form of Memory Networks but features a special memory that stores relational tuples, mimicking the “Image Schema” in human cognitive activities. We redefine textual reasoning as a sequential decision-making process that modifies or retrieves from this memory, with logic rules serving as state-transition functions. Activating logic rules for reasoning involves two problems, variable binding and relation activation, and our model is a first step toward solving them jointly. It achieves an average error rate of 0.7% on bAbI-20, a widely used synthetic reasoning benchmark, using fewer than 1k training samples and no supporting facts.
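To make the setup concrete, the sketch below illustrates the abstract's core idea: a memory of relational tuples, with a logic rule acting as a state-transition function over that memory. All names here (`RelationalMemory`, `Tuple3`, `rule_transitive_is_in`) and the tuple format are hypothetical illustrations, not the authors' implementation; in the paper, a reinforcement-learned policy (not shown) would choose which rule to activate and how to bind its variables.

```python
# Minimal sketch of reasoning as state transitions over a tuple memory.
# Assumption: tuple format, rule set, and direct rule activation are
# illustrative only; the paper learns rule activation via deep RL.

from typing import NamedTuple, Optional

class Tuple3(NamedTuple):
    subj: str
    rel: str
    obj: str

class RelationalMemory:
    """Memory storing (subject, relation, object) tuples, loosely
    mimicking the 'Image Schema' described in the abstract."""
    def __init__(self) -> None:
        self.tuples: list[Tuple3] = []

    def write(self, t: Tuple3) -> None:
        # Overwrite any existing tuple with the same subject and relation.
        self.tuples = [x for x in self.tuples
                       if not (x.subj == t.subj and x.rel == t.rel)]
        self.tuples.append(t)

    def query(self, subj: str, rel: str) -> Optional[str]:
        for t in self.tuples:
            if t.subj == subj and t.rel == rel:
                return t.obj
        return None

# A logic rule as a state-transition function on the memory.
# Example rule: transitivity of 'is_in'
# (if A is_in B and B is_in C, then A is_in C).
def rule_transitive_is_in(mem: RelationalMemory) -> None:
    derived = [Tuple3(a.subj, "is_in", b.obj)
               for a in mem.tuples for b in mem.tuples
               if a.rel == b.rel == "is_in" and a.obj == b.subj]
    for t in derived:
        mem.write(t)

if __name__ == "__main__":
    mem = RelationalMemory()
    mem.write(Tuple3("john", "is_in", "kitchen"))  # "John went to the kitchen."
    mem.write(Tuple3("apple", "is_in", "john"))    # "John picked up the apple."
    rule_transitive_is_in(mem)                     # activate the rule
    print(mem.query("apple", "is_in"))             # -> kitchen
```

In this toy example, answering "Where is the apple?" requires one rule activation; the bAbI-20 tasks the abstract cites involve longer chains of such decisions, which is what motivates framing reasoning as a sequential decision-making process.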
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Yiqun Yao, Jiaming Xu, Jing Shi, Bo Xu