Article ID: 4962248
Journal: Procedia Computer Science
Published Year: 2016
Pages: 17
File Type: PDF
Abstract

How do brains learn which features matter, how much, when, and for what purposes? A given feature may matter more or less for recognizing different learned patterns, and in different contexts and attentional foci. Simple executable “neural circuits,” built from biologically inspired, reusable memory-pattern components in the NeurOS™ and NeuroBlocks™ technology, model and implement a range of learned and dynamic contextual, situational, and attentional feature relevance. A pattern is a collection of weighted features, roughly analogous to a neuron or neuron assembly. New patterns are created for sufficiently novel feature combinations. Individual feature weights in best-matching existing patterns grow or diminish with repetition, yielding patterns that adjust to repeated experience. Arbitrarily complex classification meshes typical of human knowledge are easily assembled by varying a single novelty parameter. Cascading pattern recognitions build up layered feature vocabularies from concrete to abstract. Names or labels are modeled as synonyms for experience patterns. Context can be modeled as yet another feature, derived from recent activity, to discriminate among otherwise similar patterns. Attention can be modeled as broad dynamic parameters that modulate feature signal strengths.
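The core mechanism the abstract describes lends itself to a short sketch. The Python below is a minimal illustration, not the NeurOS/NeuroBlocks API: the names (PatternMemory, present, novelty, learn_rate) and the cosine-similarity match are assumptions chosen for clarity. It shows patterns as weighted feature dictionaries, a novelty threshold gating creation of new patterns, repetition-driven weight adjustment, and context/attention handled as an extra feature and a multiplicative gain.

from math import sqrt


def similarity(weights: dict[str, float], features: dict[str, float]) -> float:
    """Cosine similarity between a stored pattern's weights and an input."""
    dot = sum(w * features.get(f, 0.0) for f, w in weights.items())
    norm_w = sqrt(sum(w * w for w in weights.values()))
    norm_x = sqrt(sum(v * v for v in features.values()))
    return dot / (norm_w * norm_x) if norm_w and norm_x else 0.0


class PatternMemory:
    """Patterns are collections of weighted features; a novelty threshold
    decides whether an input updates its best match or creates a new pattern.
    Illustrative sketch only; names and parameters are assumptions."""

    def __init__(self, novelty: float = 0.8, learn_rate: float = 0.1):
        self.novelty = novelty        # higher => more new patterns, finer mesh
        self.learn_rate = learn_rate  # how fast weights track repetition
        self.patterns: list[dict[str, float]] = []

    def present(self, features: dict[str, float]) -> int:
        """Present one input; return the index of the matched or new pattern."""
        scored = [(similarity(p, features), i) for i, p in enumerate(self.patterns)]
        best, idx = max(scored, default=(0.0, -1))
        if best < self.novelty:
            # Sufficiently novel feature combination: remember it as a new pattern.
            self.patterns.append(dict(features))
            return len(self.patterns) - 1
        # Familiar: individual weights grow toward repeated features and
        # diminish for features absent from this experience.
        pattern = self.patterns[idx]
        for f in set(pattern) | set(features):
            current = pattern.get(f, 0.0)
            pattern[f] = current + self.learn_rate * (features.get(f, 0.0) - current)
        return idx


# Usage sketch: context as just another feature; attention as a gain on inputs.
mem = PatternMemory(novelty=0.8)
cup_in_kitchen = {"handle": 1.0, "cylindrical": 0.8, "ctx:kitchen": 1.0}
attention_gain = {"handle": 1.5}  # attended features carry stronger signals
attended = {f: v * attention_gain.get(f, 1.0) for f, v in cup_in_kitchen.items()}
mem.present(attended)

Varying the single novelty parameter changes how finely the resulting classification mesh divides experience, and cascading one such memory's outputs into another's inputs yields the concrete-to-abstract feature vocabularies the abstract mentions.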

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Science (General)