Article code | Journal code | Publication year | English article | Full text |
---|---|---|---|---|
412618 | 679661 | 2011 | 12-page PDF | Free download |

This paper addresses the problem of tuning the input and output parameters of a fuzzy logic controller. The system learns autonomously, without supervision or a priori training data. Two novel techniques are proposed. The first combines Q(λ)-learning with function approximation (a fuzzy inference system) to tune the parameters of a fuzzy logic controller operating in continuous state and action spaces. The second combines Q(λ)-learning with genetic algorithms to tune the parameters of a fuzzy logic controller in discrete state and action spaces. Both techniques are applied to different pursuit–evasion differential games and compared with a classical control strategy, with Q(λ)-learning alone, with reward-based genetic-algorithm learning, and with the technique of Dai et al. (2005) [19], in which a neural network serves as the function approximator for Q-learning. Computer simulations show the usefulness of the proposed techniques.
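The Q(λ)-learning core that both techniques build on can be illustrated with a minimal tabular sketch. This is Watkins's Q(λ) with accumulating eligibility traces on a toy discrete problem, not the paper's implementation: the paper replaces the table with a fuzzy inference system (continuous case) or couples the learner with a genetic algorithm (discrete case). The environment interface `env_step(s, a)` and all hyperparameter values here are illustrative assumptions.

```python
import random

def q_lambda(env_step, n_states, n_actions, episodes=200,
             alpha=0.1, gamma=0.9, lam=0.8, eps=0.3, seed=0):
    """Watkins's Q(lambda) with accumulating eligibility traces.

    `env_step(s, a)` must return (next_state, reward, done). This tabular
    toy stands in for the paper's fuzzy-inference-system approximator.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        e = [[0.0] * n_actions for _ in range(n_states)]  # eligibility traces
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = env_step(s, a)
            a_star = max(range(n_actions), key=lambda i: Q[s2][i])
            # TD error toward the greedy successor value
            delta = r + (0.0 if done else gamma * Q[s2][a_star]) - Q[s][a]
            e[s][a] += 1.0
            was_greedy = Q[s][a] >= max(Q[s]) - 1e-12
            for si in range(n_states):
                for ai in range(n_actions):
                    Q[si][ai] += alpha * delta * e[si][ai]
                    # Watkins's variant: cut traces after exploratory moves
                    e[si][ai] *= gamma * lam if was_greedy else 0.0
            s = s2
    return Q
```

On a short corridor task (move right to reach a goal state), this learner converges to preferring the shortest path; the traces speed up credit assignment over one-step Q-learning, which motivates the λ variant used in the paper.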
Research highlights
► Robots learn to play the pursuit-evasion differential game.
► Autonomous learning of the input and output parameters of a FLC.
► Q(λ)-learning tunes the parameters of a FIS critic over a continuous space.
► Genetic algorithms continuously improve the performance of the FLC.
► Q(λ)-learning followed by genetic algorithms has the shortest learning time.
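The genetic-algorithm side of the second technique can be sketched as a simple real-coded GA that evolves a parameter vector (e.g., fuzzy membership-function centers) to maximize a reward-based fitness. The selection, crossover, and mutation operators below are generic illustrative choices, not the paper's exact GA design, and `fitness` is an assumed user-supplied evaluation of the controller.

```python
import random

def ga_tune(fitness, n_params, pop_size=20, generations=40,
            bounds=(-1.0, 1.0), mut_rate=0.2, mut_scale=0.1, seed=0):
    """Minimal real-coded GA for tuning controller parameters.

    `fitness(params)` returns a score to maximize (e.g., accumulated
    reward of the fuzzy controller in the game). Illustrative only.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        elite = ranked[:pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            # uniform crossover: each gene from either parent
            child = [rng.choice(pair) for pair in zip(p1, p2)]
            # Gaussian mutation, clipped to the parameter bounds
            for i in range(n_params):
                if rng.random() < mut_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, mut_scale)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

Because each fitness evaluation requires running the controller, GA-only learning tends to be slow; seeding or combining it with Q(λ)-learning, as the highlights note, is what shortens the overall learning time.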
Journal: Robotics and Autonomous Systems - Volume 59, Issue 1, January 2011, Pages 22–33