Article ID: 2077104
Journal: Biosystems
Published Year: 2008
Pages: 7
File Type: PDF
Abstract

Artificial feed-forward neural networks are commonly used as a tool for modelling stimulus selection and animal signalling. A key finding of stimulus selection research is generalization: once a given behaviour has been established to one stimulus, perceptually similar novel stimuli are likely to elicit a similar response. In feed-forward neural networks, stimulus generalization arises automatically as a property of the network. This property raises understandable concern about how sensitive the network's behaviour is to variation in the internal parameter values that define its structure and its training process. Researchers must have confidence that the predictions of their model follow from the underlying biology they deliberately incorporated into the model, and not from often arbitrary choices about model implementation. We study how network training and parameter perturbations influence the qualitative and quantitative behaviour of a simple but general network. Specifically, for models of stimulus control we study the effect that parameter variation has on the shape of the generalization curves produced by the network. We show that certain network and training conditions produce undesirable artifacts that need to be avoided (or at least understood) when modelling stimulus selection.
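
The sketch below is an illustrative example, not the authors' model: a small feed-forward network is trained to respond to one stimulus, its response to perceptually similar novel stimuli is read off as a generalization curve, and the trained weights are then perturbed to probe how the curve's shape changes. The stimulus encoding (Gaussian activity over input units along a single perceptual dimension), the network size, and the training settings are all assumptions made purely for illustration.

```python
# Minimal sketch of a generalization curve from a feed-forward network,
# and of its sensitivity to weight perturbations (all settings assumed).
import numpy as np

rng = np.random.default_rng(0)

N_INPUT, N_HIDDEN = 30, 10                    # assumed network dimensions
positions = np.linspace(0.0, 1.0, N_INPUT)    # one perceptual dimension

def encode(stimulus, width=0.08):
    """Encode a stimulus value on [0, 1] as Gaussian activity over input units."""
    return np.exp(-((positions - stimulus) ** 2) / (2 * width ** 2))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(w1, w2, x):
    h = sigmoid(x @ w1)                        # hidden-layer activity
    return sigmoid(h @ w2), h                  # scalar response per stimulus

def train(s_plus=0.5, s_minus=0.2, epochs=2000, lr=0.5):
    """Train the network to respond to S+ (target 1) and not to S- (target 0)."""
    w1 = rng.normal(0, 0.1, (N_INPUT, N_HIDDEN))
    w2 = rng.normal(0, 0.1, (N_HIDDEN, 1))
    X = np.vstack([encode(s_plus), encode(s_minus)])
    t = np.array([[1.0], [0.0]])
    for _ in range(epochs):
        y, h = forward(w1, w2, X)
        err = y - t                            # squared-error gradient w.r.t. output
        d2 = err * y * (1 - y)                 # back-prop through output sigmoid
        d1 = (d2 @ w2.T) * h * (1 - h)         # back-prop through hidden sigmoid
        w2 -= lr * h.T @ d2
        w1 -= lr * X.T @ d1
    return w1, w2

def generalization_curve(w1, w2, probes):
    """Network response to novel probe stimuli along the perceptual dimension."""
    X = np.vstack([encode(p) for p in probes])
    y, _ = forward(w1, w2, X)
    return y.ravel()

w1, w2 = train()
probes = np.linspace(0.0, 1.0, 101)
baseline = generalization_curve(w1, w2, probes)
print(f"peak of baseline curve at probe = {probes[np.argmax(baseline)]:.2f}")

# Perturb the trained weights to see how much the curve shifts: the kind of
# parameter variation whose effect on curve shape the paper examines.
for sigma in (0.05, 0.2):
    perturbed = generalization_curve(w1 + rng.normal(0, sigma, w1.shape),
                                     w2 + rng.normal(0, sigma, w2.shape),
                                     probes)
    drift = np.max(np.abs(perturbed - baseline))
    print(f"weight noise sigma={sigma}: max change in response = {drift:.3f}")
```

In this toy setup the baseline response peaks near the trained S+ value and falls off with perceptual distance, while larger weight perturbations visibly distort the curve; the paper's point is that such distortions can also arise from arbitrary implementation choices rather than from the biology being modelled.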

Related Topics
Physical Sciences and Engineering > Mathematics > Modelling and Simulation