Article ID: 378551
Journal: Cognitive Systems Research
Published Year: 2010
Pages: 15 Pages
File Type: PDF
Abstract

Human cognition is characterized by three important features: productivity, dynamics, and grounding. These features can be integrated in a neural architecture of language processing. The representations in this architecture always remain “in situ”, because they are grounded in perception, action, emotion, associations and (semantic) relations. The neural architecture shows how these representations can be combined in a productive manner, and how dynamics influences this process. The constraints that these features impose on one another result in an architecture in which local and global aspects interact in processing and learning. The architecture consists of neural “binding” mechanisms that produce (novel) sentence structures on the fly. Here, we discuss how the control of this binding process can be learned. We trained a feedforward network (FFN) for this task. The results show that information from the architecture is needed as input to learn control of binding; the control system is therefore recurrent. We show that this recurrent system can learn control of binding for basic (but recursive) sentence structures. After learning, the binding process performs well on a series of test sentences, including sentences with (unlimited) embeddings. However, for some of these sentences, difficulties arise due to dynamical binding conflicts in the architecture. We also discuss and illustrate the potential influence that the dynamics in the architecture could have on the binding process.
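The control scheme the abstract describes — a feedforward network that receives feedback from the architecture's state, making the overall control system recurrent — can be sketched as follows. This is a minimal illustration under our own assumptions (layer sizes, the sigmoid/tanh nonlinearities, and the placeholder state update are all hypothetical), not the authors' implementation:

```python
import numpy as np

# Hypothetical sketch of a recurrent control loop for binding:
# a feedforward network (FFN) whose input at each word is the word's
# category PLUS feedback from the architecture's current binding state.
# All names and sizes below are illustrative assumptions.

rng = np.random.default_rng(0)

N_WORD = 4      # word-category input units (e.g. noun, verb, determiner, pronoun)
N_STATE = 6     # feedback units reporting the architecture's binding state
N_HIDDEN = 8    # hidden layer of the FFN controller
N_CONTROL = 5   # control outputs gating the binding mechanisms

W_in = rng.normal(0.0, 0.1, (N_HIDDEN, N_WORD + N_STATE))
W_out = rng.normal(0.0, 0.1, (N_CONTROL, N_HIDDEN))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def control_step(word_vec, state_vec):
    """One step: the FFN maps (word input + architecture feedback)
    to a vector of binding-control signals in (0, 1)."""
    h = np.tanh(W_in @ np.concatenate([word_vec, state_vec]))
    return sigmoid(W_out @ h)

def process_sentence(word_vecs):
    """Run the control loop over a sentence. Because the architecture's
    state is fed back into the FFN, the combined system is recurrent."""
    state = np.zeros(N_STATE)
    controls = []
    for w in word_vecs:
        c = control_step(w, state)
        # Placeholder state update (in the real architecture this feedback
        # would come from the binding dynamics, not from the controller).
        state = np.tanh(0.5 * state + np.resize(c, N_STATE))
        controls.append(c)
    return np.array(controls)

# Toy "sentence": five one-hot word-category vectors.
sentence = [np.eye(N_WORD)[i % N_WORD] for i in range(5)]
controls = process_sentence(sentence)
```

The point of the sketch is the loop structure, not the weights: without the `state` feedback, the controller sees only the current word and cannot learn the context-dependent control the abstract says is required.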

Related Topics
Physical Sciences and Engineering; Computer Science; Artificial Intelligence