Article ID: 9653394
Journal: Neurocomputing
Published Year: 2005
Pages: 19
File Type: PDF
Abstract
Self-organizing maps (SOMs) have become popular for tasks in data visualization, pattern classification, and natural language processing, and can be seen as one of the major artificial neural network concepts in use today. Their general idea is to approximate a high-dimensional, previously unknown input distribution by a lower-dimensional neural network structure, with the goal of modeling the topology of the input space as closely as possible. Classical SOMs read the input values one by one in random but sequential order and thus adjust the network structure over space: the network is built up as larger and larger parts of the input are read. In contrast to this approach, we present a SOM that processes the whole input in parallel and organizes itself over time. The main reason for parallel input processing is that existing knowledge can be used to recognize parts of patterns in the input space that have already been learned. This way, networks can be developed that do not reorganize their structure from scratch every time a new set of input vectors is presented, but rather adjust their internal architecture in accordance with previous mappings. One basic application could be modeling the whole-part relationship through layered architectures.
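To make the contrast in the abstract concrete, the following is a minimal sketch, not the authors' algorithm: it compares a classical SOM, which presents inputs one by one in random order, with a batch-style update that processes the whole input set in each pass. All function and parameter names (sequential_som, batch_som, sigma, lr) are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): sequential vs. batch SOM updates.
import numpy as np

def grid_distances(rows, cols):
    """Squared grid distances between all pairs of map units."""
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    return (diff ** 2).sum(-1)

def sequential_som(data, rows=5, cols=5, epochs=20, lr=0.5, sigma=1.5, seed=0):
    """Classical SOM: inputs are presented one by one in random but sequential order."""
    rng = np.random.default_rng(seed)
    weights = rng.random((rows * cols, data.shape[1]))
    d2 = grid_distances(rows, cols)
    for _ in range(epochs):
        for x in rng.permutation(data):                        # one input at a time
            bmu = np.argmin(((weights - x) ** 2).sum(1))       # best-matching unit
            h = np.exp(-d2[bmu] / (2 * sigma ** 2))            # Gaussian neighborhood
            weights += lr * h[:, None] * (x - weights)         # move units toward x
    return weights

def batch_som(data, rows=5, cols=5, epochs=20, sigma=1.5, seed=0):
    """Batch-style SOM: every epoch sees the whole input set at once."""
    rng = np.random.default_rng(seed)
    weights = rng.random((rows * cols, data.shape[1]))
    d2 = grid_distances(rows, cols)
    for _ in range(epochs):
        # best-matching units of all input vectors, computed in one pass
        bmus = np.argmin(((data[:, None, :] - weights[None, :, :]) ** 2).sum(-1), axis=1)
        h = np.exp(-d2[bmus] / (2 * sigma ** 2))               # (n_samples, n_units)
        num = h.T @ data                                       # neighborhood-weighted sums
        den = h.sum(0)[:, None]
        weights = num / np.maximum(den, 1e-12)                 # update all units together
    return weights

if __name__ == "__main__":
    data = np.random.default_rng(1).random((200, 3))
    print(sequential_som(data).shape, batch_som(data).shape)   # (25, 3) (25, 3)
```

The batch variant is only meant to illustrate what "processing the whole input in parallel" can look like; the paper's proposal additionally reuses previously learned mappings rather than restarting the organization for each new input set.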
Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence