Article ID: 391657
Journal: Information Sciences
Published Year: 2016
Pages: 13 Pages
File Type: PDF
Abstract

In this paper, the robust optimal control of continuous-time affine nonlinear systems with matched uncertainties is investigated by using a data-based integral policy iteration approach. The work is a natural extension of traditional optimal control design, under the framework of adaptive dynamic programming (ADP), to the robust optimal control of nonlinear systems with matched uncertainties. On the theoretical side, it is shown that augmenting the optimal controller of the nominal system with an additional feedback gain yields a robust controller for the matched uncertain system; moreover, this controller is optimal with respect to a newly defined cost function. For the implementation, the data-based integral policy iteration algorithm is used to solve the Hamilton–Jacobi–Bellman equation associated with the nominal system when its dynamics are completely unknown. An actor-critic structure built on neural networks, together with a least-squares implementation, is then employed to derive the optimal control law iteratively, so that a closed-form expression of the robust optimal controller becomes available. Two simulation examples with application backgrounds are presented to illustrate the effectiveness of the established robust optimal control scheme. In summary, the result developed in this paper broadens the application scope of ADP-based optimal control to more general nonlinear systems with dynamical uncertainties.
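As a rough illustration of the implementation route described in the abstract, the sketch below shows one form of integral policy iteration with a least-squares critic update. It is only a minimal sketch under stated assumptions: the two-dimensional plant f and g, the polynomial critic basis phi, the weights Q and R, the reinforcement interval T, and the initial states are hypothetical choices for illustration and are not taken from the paper; it covers only the nominal-system learning loop, not the additional feedback gain of the robust design, and it uses g(x) directly in the policy-improvement step rather than a second actor network.

import numpy as np

# Hypothetical two-dimensional affine nonlinear plant, used here only to
# generate state trajectories; the learning loop never evaluates f(x) itself.
def f(x):
    return np.array([-x[0] + x[1],
                     -0.5 * (x[0] + x[1]) + 0.5 * x[1] * np.sin(x[0]) ** 2])

def g(x):  # input gain, used in the policy-improvement step of this sketch
    return np.array([0.0, np.sin(x[0])])

# Critic approximation V(x) ~= w^T phi(x) with a small polynomial basis
# (an assumed basis, not the one used in the paper).
def phi(x):
    return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

def dphi(x):  # Jacobian of the basis, so that grad V(x) = dphi(x)^T w
    return np.array([[2 * x[0], 0.0],
                     [x[1],     x[0]],
                     [0.0,      2 * x[1]]])

Q = np.eye(2)           # state weighting
R = np.array([[1.0]])   # control weighting
dt, T = 0.001, 0.05     # Euler step and reinforcement interval length

def policy(x, w):
    # Policy improvement: u = -1/2 R^{-1} g(x)^T grad V(x)
    return -0.5 * np.linalg.solve(R, g(x).reshape(1, -1) @ (dphi(x).T @ w))

def collect(w, x0, n_intervals):
    # For each interval [t, t+T] along one closed-loop trajectory, record the
    # regressor phi(x(t)) - phi(x(t+T)) and the measured integral of the
    # utility r = x^T Q x + u^T R u; only sampled (x, u) data are used.
    rows, targets = [], []
    x = np.array(x0, dtype=float)
    for _ in range(n_intervals):
        x_start, cost = x.copy(), 0.0
        for _ in range(int(T / dt)):
            u = policy(x, w)
            cost += (x @ Q @ x + float(u @ R @ u)) * dt
            x = x + dt * (f(x) + g(x) * u[0])
        rows.append(phi(x_start) - phi(x))
        targets.append(cost)
    return rows, targets

# Integral policy iteration: least-squares policy evaluation of
#   w_i^T (phi(x(t)) - phi(x(t+T))) = integral of r under policy u_i,
# followed by the policy improvement above, repeated until the weights settle.
w = np.zeros(3)
initial_states = [[1.0, -1.0], [-1.0, 0.5], [0.5, 1.0]]
for iteration in range(10):
    X, y = [], []
    for x0 in initial_states:          # several trajectories for excitation
        rows, targets = collect(w, x0, n_intervals=40)
        X += rows
        y += targets
    w_new, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    if np.linalg.norm(w_new - w) < 1e-4:
        break
    w = w_new

print("converged critic weights:", w)

Several trajectories are collected per iteration so that the least-squares regression stays well conditioned without injecting exploration noise into the policy being evaluated.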

Related Topics
Physical Sciences and Engineering / Computer Science / Artificial Intelligence