| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 696946 | Automatica | 2012 | 7 Pages | |
This paper deals with the finite horizon stochastic optimal control problem with the expectation of the p-norm as the objective function and a jointly Gaussian, although not necessarily independent, additive disturbance process. We develop an approximation strategy that solves the problem over a certain class of nonlinear feedback policies while ensuring satisfaction of hard input constraints. A bound on the suboptimality of the proposed strategy within this class of nonlinear feedback policies is given for the special case of p = 1. We also develop a receding horizon policy that is recursively feasible with respect to state chance constraints and/or hard control input constraints in the presence of bounded disturbances. The performance of the proposed policies is examined in two numerical examples.
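To make the problem setup concrete, the following is a minimal sketch, not the paper's method: it Monte Carlo estimates the expected 1-norm cost of a scalar linear system driven by jointly Gaussian, correlated disturbances, under a saturated linear feedback policy (one simple nonlinear policy that respects a hard input constraint). All numerical values (`a`, `b`, `k`, `u_max`, the horizon, and the disturbance covariance) are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (illustrative, not the paper's policy class):
# estimate J = E[ sum_t (|x_t| + |u_t|) ] for x_{t+1} = a*x_t + b*u_t + w_t,
# with jointly Gaussian, correlated disturbances w_0, ..., w_{T-1},
# under the saturated feedback u_t = clip(-k*x_t, -u_max, u_max),
# which enforces the hard input constraint |u_t| <= u_max.

rng = np.random.default_rng(0)
a, b, k, u_max, T, n_mc = 0.9, 1.0, 0.5, 1.0, 7, 20000

# Jointly Gaussian but not independent: off-diagonal correlation 0.3.
cov = 0.3 * np.ones((T, T)) + 0.7 * np.eye(T)
w = rng.multivariate_normal(np.zeros(T), cov, size=n_mc)  # shape (n_mc, T)

cost = np.zeros(n_mc)
x = np.zeros(n_mc)
for t in range(T):
    u = np.clip(-k * x, -u_max, u_max)  # hard input constraint
    cost += np.abs(x) + np.abs(u)       # 1-norm stage cost (p = 1 case)
    x = a * x + b * u + w[:, t]

print(round(float(cost.mean()), 2))  # Monte Carlo estimate of the expected cost
```

The saturation makes the closed-loop policy nonlinear in the state even though the underlying feedback gain is linear, which is the flavor of policy class the abstract refers to.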