Article ID Journal Published Year Pages File Type
4947178 Neurocomputing 2017 7 Pages PDF
Abstract
In this paper, a smooth function is constructed to approximate the nonsmooth output of max-min fuzzy neural networks (FNNs), and its approximation properties are also presented. Replacing the output of max-min FNNs with its smoothing approximation function, the error function, which measures the discrepancy between the actual outputs and the desired outputs of max-min FNNs, becomes continuously differentiable. A smoothing gradient descent-based algorithm with the Armijo-Goldstein step size rule is then formulated to train max-min FNNs. Based on an existing convergence result, the convergence of the proposed algorithm follows readily. Furthermore, the proposed algorithm also provides a feasible procedure for solving fuzzy relational equations with max-min composition. Finally, numerical examples are implemented to support our results and demonstrate that the proposed smoothing algorithm has better learning performance than two other gradient descent-based algorithms.
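The abstract does not specify the paper's smoothing function, but the idea of replacing the nonsmooth max-min output with a differentiable surrogate can be sketched with a standard log-sum-exp smoothing (a hypothetical choice for illustration; the function names, the single-output network form y = max_j min(w_j, x_j), and the parameter `beta` are assumptions, not the authors' construction):

```python
import math

def smooth_max(xs, beta=500.0):
    # Log-sum-exp smoothing of max: overestimates max by at most log(n)/beta,
    # so the approximation tightens as beta grows.
    m = max(xs)  # shift by the true max for numerical stability
    return m + math.log(sum(math.exp(beta * (x - m)) for x in xs)) / beta

def smooth_min(xs, beta=500.0):
    # min(x) = -max(-x), so reuse the smooth max.
    return -smooth_max([-x for x in xs], beta)

def fnn_output(weights, inputs):
    # Exact (nonsmooth) output of a single-output max-min FNN:
    # y = max_j min(w_j, x_j).
    return max(min(w, x) for w, x in zip(weights, inputs))

def smooth_fnn_output(weights, inputs, beta=500.0):
    # Differentiable surrogate: every max/min is replaced by its smoothing,
    # making the squared-error training objective continuously differentiable.
    mins = [smooth_min([w, x], beta) for w, x in zip(weights, inputs)]
    return smooth_max(mins, beta)
```

With `weights = [0.3, 0.8, 0.5]` and `inputs = [0.6, 0.7, 0.9]`, the exact output is max(0.3, 0.7, 0.5) = 0.7, and the smooth surrogate stays within log(3)/beta of it, so a gradient-based rule (such as descent with an Armijo-type step size) can be applied to the smoothed error function.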
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence