| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 862008 | Procedia Engineering | 2012 | 8 | |
Optimizing the convergence of a Neural Net Classifier (NNC) is an important task for increasing the speed and accuracy of the decision-making process. Learning algorithms are used to facilitate this optimization. Simulated Annealing (SA) and Backpropagation (BP) are two popular optimization algorithms. The objective of this study is to compare the optimization performance of SA and BP on a feed-forward NNC, which conventionally relies on BP. Five standard datasets are considered: WINE, IRIS, DIABETES, TEACHING ASSISTANT EVALUATION (TAE), and GLASS. Experimental results reveal that SA outperformed BP during both training and testing (except on the WINE data, and there only during testing). Since the generalized notion is that BP is one of the best optimizers for an NNC, these results are interesting. The authors attribute them to the fact that SA is a highly randomized approach to finding the global solution, whereas BP is a gradient-based search and thus prone to being trapped in local minima. Another important reason could be that BP relies on the best weights learned during training when classifying test cases, which might not be appropriate for a new dataset. SA, on the other hand, initiates a new search on the given dataset and, because of its randomized exploration of the search space, is able to converge toward the global minimum. The paper therefore argues that, for optimizing a classical NNC, SA is more efficient than BP because of its random and flexible search.
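To make the SA-versus-BP contrast concrete, here is a minimal sketch of simulated-annealing weight optimization for a single-hidden-layer feed-forward classifier. This is not the authors' implementation: the architecture, the Gaussian proposal distribution, and the hyperparameters (`T0`, `cooling`, `step_size`, `n_hidden`) are illustrative assumptions, chosen only to show the Metropolis accept/reject step that lets SA escape the local minima a gradient-based search can get stuck in.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, b1, W2, b2, X):
    # Single hidden layer with sigmoid units, softmax output.
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    logits = H @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss(params, X, y):
    # Mean cross-entropy; y holds integer class indices.
    W1, b1, W2, b2 = params
    p = forward(W1, b1, W2, b2, X)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def anneal(X, y, n_hidden=8, T0=1.0, cooling=0.995,
           steps=5000, step_size=0.1):
    n_in, n_out = X.shape[1], int(y.max()) + 1
    params = [rng.normal(0, 0.5, (n_in, n_hidden)), np.zeros(n_hidden),
              rng.normal(0, 0.5, (n_hidden, n_out)), np.zeros(n_out)]
    cur = best = loss(params, X, y)
    best_params = [p.copy() for p in params]
    T = T0
    for _ in range(steps):
        # Propose a random perturbation of every weight matrix.
        cand = [p + rng.normal(0, step_size, p.shape) for p in params]
        c = loss(cand, X, y)
        # Metropolis rule: always accept improvements, and accept
        # worse moves with probability exp(-delta / T) so the search
        # can climb out of local minima while T is still high.
        if c < cur or rng.random() < np.exp((cur - c) / T):
            params, cur = cand, c
            if cur < best:
                best, best_params = cur, [p.copy() for p in params]
        T *= cooling  # geometric cooling schedule
    return best_params, best

if __name__ == "__main__":
    # Toy run on IRIS, one of the five datasets used in the study.
    from sklearn.datasets import load_iris
    X, y = load_iris(return_X_y=True)
    X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize features
    params, train_loss = anneal(X, y)
    print(f"final training cross-entropy: {train_loss:.3f}")
```

As the acceptance rule makes explicit, early in the run (high `T`) the sampler wanders almost freely through weight space; as `T` decays it behaves increasingly like greedy descent. BP, by contrast, follows the local gradient from its starting point at every step, which is the inclination toward local minima that the paper cites.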