| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 410789 | Neurocomputing | 2008 | 4 | |
Abstract
Gradient descent algorithms such as backpropagation (BP) and its variants for multi-layered feed-forward networks are widely used in many applications. However, the most serious problem associated with BP is the local minima problem; in particular, an excessive number of hidden nodes deepens it. We propose an algorithm, called the separate learning algorithm, in which the hidden-to-output and input-to-hidden weights are trained separately, and which shows stable training performance despite a large number of hidden nodes. Simulations on several benchmark problems demonstrate the validity of the proposed method.
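The abstract does not spell out the paper's exact update rules, so the following is only a minimal sketch of the general idea it names: alternating, independent gradient updates of the hidden-to-output weights and the input-to-hidden weights of a one-hidden-layer network, rather than a single joint BP step. All names, learning rates, and architectural details below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def separate_learning(X, y, n_hidden=8, lr=0.5, epochs=5000, seed=0):
    """Hypothetical sketch: train W2 (hidden-to-output) and W1
    (input-to-hidden) in separate, alternating gradient steps."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))  # input-to-hidden
    W2 = rng.normal(scale=0.1, size=(n_hidden, 1))           # hidden-to-output

    for _ in range(epochs):
        # Phase 1: update hidden-to-output weights with W1 frozen.
        H = sigmoid(X @ W1)                      # hidden activations
        out = sigmoid(H @ W2)
        err = out - y                            # squared-error residual
        W2 -= lr * (H.T @ (err * out * (1 - out)))

        # Phase 2: update input-to-hidden weights with W2 frozen.
        H = sigmoid(X @ W1)
        out = sigmoid(H @ W2)
        err = out - y
        delta_h = ((err * out * (1 - out)) @ W2.T) * H * (1 - H)
        W1 -= lr * (X.T @ delta_h)

    return W1, W2

# Example on XOR, a classic benchmark where plain BP can stall in local minima.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = separate_learning(X, y)
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

Freezing one weight matrix while updating the other turns each phase into a lower-dimensional optimization, which is one plausible reading of why the method stays stable as the number of hidden nodes grows; the paper itself should be consulted for the actual derivation and update rules.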
Related Topics
- Physical Sciences and Engineering
- Computer Science
- Artificial Intelligence
Authors
Bumghi Choi, Ju-Hong Lee, Deok-Hwan Kim