Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
752170 | Systems & Control Letters | 2013 | 7 |
Abstract
The problem of steering the state of a system from a given initial condition over a fixed time interval, while simultaneously minimizing a criterion of optimality, is commonly referred to as the finite-horizon optimal control problem. One of the standard approaches to this problem relies upon the solution of the Hamilton–Jacobi–Bellman (HJB) partial differential equation, which may be difficult or impossible to obtain in closed form. Herein we propose a methodology that avoids the explicit solution of the HJB PDE by exploiting a dynamic extension. This results in a dynamic, time-varying state feedback yielding an approximate solution to the finite-horizon optimal control problem.
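For context, a standard statement of the finite-horizon optimal control problem and its associated HJB equation is sketched below; the notation ($x$, $u$, $\ell$, $m$, $V$, $f$, $g$) is generic textbook notation, not taken from the paper itself.

$$
\min_{u(\cdot)} \; J = m\bigl(x(T)\bigr) + \int_{0}^{T} \ell\bigl(x(t), u(t)\bigr)\, dt
\quad \text{s.t.} \quad \dot{x} = f(x) + g(x)\,u, \qquad x(0) = x_0 ,
$$

whose value function $V(t,x)$ satisfies the HJB partial differential equation

$$
-\frac{\partial V}{\partial t}(t,x) = \min_{u} \left[ \ell(x,u) + \frac{\partial V}{\partial x}(t,x)\,\bigl(f(x) + g(x)\,u\bigr) \right],
\qquad V(T,x) = m(x).
$$

A closed-form $V$ is rarely available for nonlinear $f$ and $g$; this is the obstruction that the dynamic-extension approach proposed in the paper is designed to circumvent.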
Related Topics
Physical Sciences and Engineering › Engineering › Control and Systems Engineering
Authors
M. Sassano, A. Astolfi