Article ID: 510550
Journal: Computers & Structures
Published Year: 2006
Pages: 6
File Type: PDF
Abstract

An evolutionary method of training a neural network is described and illustrated. Fundamental changes to the usual training methods are needed because the algorithm must be supplied with discrete values of the variables (weights). To give the algorithm freedom to select weights from an unlimited range of values, mutation of the integer variables produces a progressive ‘shift’ of the centre of the range of positive/negative values offered for selection. At each iteration the range of integer values offered to the algorithm is randomly selected. The variables are mutated in shuffled order; each successful mutation is retained by the algorithm, while unsuccessful mutations are rejected. As the error progresses towards the target level, the rate of progress is controlled by progressively adapting the numerical range within which the mutation shifts are applied. The method is used to train illustrative networks to predict values of a simple trigonometric function, to provide an approximate analysis of reinforced concrete deep beams, and to predict overall buckling loads for rectangular hollow steel sections. The results obtained using the new algorithm are compared with those from conventional back-propagation (BP) training and with ‘exact’ results.
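The following is a minimal Python sketch of the kind of integer-weight mutation loop the abstract describes; it is not the authors' implementation, and all names (evolve_integer_weights, error_fn, initial_range, the range-halving rule near convergence) are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code) of an evolutionary search over
# integer-valued weights: variables are mutated one at a time in shuffled
# order, successful mutations are kept, unsuccessful ones are rejected, and
# the mutation range is adapted as the error approaches the target.

import random
from typing import Callable, List


def evolve_integer_weights(
    error_fn: Callable[[List[int]], float],  # network error for a weight vector (assumed supplied)
    n_weights: int,
    target_error: float = 1e-3,
    max_iters: int = 10_000,
    initial_range: int = 100,                # half-width of the integer mutation range (assumed value)
) -> List[int]:
    weights = [0] * n_weights
    best_error = error_fn(weights)
    mutation_range = initial_range

    for _ in range(max_iters):
        if best_error <= target_error:
            break

        # Randomly select the integer range offered at this iteration.
        half_width = random.randint(1, mutation_range)
        # Progressive 'shift' of the centre of the positive/negative range,
        # so selectable values are not confined to a fixed interval.
        centre_shift = random.randint(-half_width, half_width)

        # Mutate the variables in shuffled order.
        order = list(range(n_weights))
        random.shuffle(order)
        for i in order:
            old = weights[i]
            weights[i] = old + centre_shift + random.randint(-half_width, half_width)
            new_error = error_fn(weights)
            if new_error < best_error:
                best_error = new_error       # capture the successful mutation
            else:
                weights[i] = old             # reject the unsuccessful mutation

        # Adapt the numerical range as the error nears the target, slowing
        # the size of the mutation shifts close to convergence.
        if best_error < 10 * target_error:
            mutation_range = max(1, mutation_range // 2)

    return weights
```

With an error_fn measuring, for example, a small network's fit to a sampled trigonometric function, this loop plays the role the abstract assigns to the evolutionary trainer; the range-halving rule is only one plausible reading of the adaptive range control described.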

Related Topics
Physical Sciences and Engineering / Computer Science / Computer Science Applications
Authors