Article ID: 408387
Journal: Neurocomputing
Published Year: 2007
Pages: 13 Pages
File Type: PDF
Abstract

In standard BP-networks, hidden neuron outputs are usually spread over the whole interval (0, 1). In this paper, we propose an efficient framework for enforcing a transparent internal knowledge representation in BP-networks during training. The formed internal representations should differ as much as possible for different outputs; at the same time, the hidden neuron outputs are forced to group around three possible values, namely 1, 0 and 0.5. We call such an internal representation unambiguous and condensed. The performance of BP-networks with enforced internal representations is examined in a case study devoted to semantic image classification.

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence