Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6864203 | Neurocomputing | 2018 | 12 |
Abstract
Heuristic and deterministic optimization methods are extensively applied to the training of artificial neural networks. Both classes of methods have their own advantages and disadvantages. Heuristic stochastic optimization methods, such as genetic algorithms, perform global search but suffer from a slow convergence rate near the global optimum. Deterministic methods, such as gradient descent, exhibit a fast convergence rate around the global optimum but may get stuck in a local optimum. Motivated by these problems, a hybrid learning algorithm combining the genetic algorithm (GA) with gradient descent (GD), called HGAGD, is proposed in this paper. The new algorithm combines the global exploration ability of GA with the accurate local exploitation ability of GD to achieve faster convergence and better accuracy of the final solution. HGAGD is then employed as a new training method to optimize the parameters of a quantum-inspired neural network (QINN) for two different applications. First, two benchmark functions are chosen to demonstrate the potential of the proposed QINN with the HGAGD algorithm in dealing with function approximation problems. Next, the performance of the proposed method in forecasting the Mackey-Glass time series and the Lorenz attractor is studied. The results of these studies show the superiority of the introduced approach over other published approaches.
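To illustrate the hybrid idea described above, the following is a minimal, self-contained sketch: a simple elitist genetic algorithm performs the global search, and its best solution is then refined by gradient descent. The objective function, GA operators, and all parameter values here are illustrative assumptions for demonstration only, not the paper's HGAGD algorithm or its QINN training setup.

```python
import math
import random

def f(x):
    # Toy multimodal objective (assumed for illustration); global minimum at x = 0.
    return x * x + 3.0 * (1.0 - math.cos(3.0 * x))

def df(x):
    # Analytic gradient of f, used by the gradient-descent stage.
    return 2.0 * x + 9.0 * math.sin(3.0 * x)

def ga_stage(pop_size=30, generations=50, bounds=(-10.0, 10.0)):
    # Global exploration: mutation-only elitist GA returning its best point.
    random.seed(0)
    pop = [random.uniform(*bounds) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]                            # truncation selection
        children = [p + random.gauss(0.0, 0.5) for p in parents]  # Gaussian mutation
        pop = parents + children                                  # parents kept (elitism)
    return min(pop, key=f)

def gd_stage(x, lr=0.01, steps=200):
    # Local exploitation: refine the GA result with fixed-step gradient descent.
    for _ in range(steps):
        x -= lr * df(x)
    return x

x_ga = ga_stage()          # rough global solution from the GA
x_hybrid = gd_stage(x_ga)  # polished by gradient descent
print(x_ga, x_hybrid)
```

The division of labor mirrors the abstract: the GA only needs to land in the basin of the global optimum, after which gradient descent converges to it quickly, avoiding the GA's slow final-stage convergence.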
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Soheil Ganjefar, Morteza Tofighi