Article ID: 430284
Journal: Journal of Computer and System Sciences
Published Year: 2012
Pages: 19
File Type: PDF
Abstract

We study sequential prediction problems in which, at each time instance, the forecaster chooses a vector from a given finite set S ⊆ R^d. At the same time, the opponent chooses a "loss" vector in R^d and the forecaster suffers a loss that is the inner product of the two vectors. The goal of the forecaster is to ensure that, in the long run, the accumulated loss is not much larger than that of the best possible element in S. We consider the "bandit" setting in which the forecaster only has access to the losses of the chosen vectors (i.e., the entire loss vectors are not observed). We introduce a variant of a strategy by Dani, Hayes and Kakade achieving a regret bound that, for a variety of concrete choices of S, is of order √(nd ln|S|), where n is the time horizon. This is not improvable in general and is better than previously known bounds. The examples we consider all satisfy S ⊆ {0,1}^d, and we show how the combinatorial structure of these classes can be exploited to improve the regret bounds. We also point out computationally efficient implementations for various interesting choices of S.
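
For intuition only, here is a minimal Python sketch of the prediction protocol described above, together with a GeometricHedge-style exponential-weights strategy that mixes in uniform exploration over S and builds importance-weighted loss estimates from the bandit feedback. The function name geometric_hedge_sketch, the parameter values eta and gamma, and the NumPy formulation are illustrative assumptions; this is not the tuned strategy analyzed in the paper.

import numpy as np

def geometric_hedge_sketch(S, loss_fn, n, eta=0.05, gamma=0.05, seed=0):
    """Hedge-style sketch for bandit linear optimization over a finite S in R^d.

    Each round: sample an action from an exponential-weights distribution mixed
    with uniform exploration over S, observe only the scalar loss <v, ell_t>,
    and form an unbiased estimate of ell_t via the (pseudo-)inverse covariance
    of the sampling distribution, in the spirit of GeometricHedge-type strategies.
    """
    rng = np.random.default_rng(seed)
    S = np.asarray(S, dtype=float)            # shape (K, d): the K candidate vectors
    K, _ = S.shape
    weights = np.ones(K)

    for t in range(n):
        p = (1 - gamma) * weights / weights.sum() + gamma / K  # uniform exploration mix
        cov = (S * p[:, None]).T @ S          # E[v v^T] under the sampling distribution
        k = rng.choice(K, p=p)
        v = S[k]
        loss = loss_fn(t, v)                  # bandit feedback: only <v, ell_t> is revealed
        ell_hat = loss * (np.linalg.pinv(cov) @ v)   # unbiased estimate of ell_t
        weights *= np.exp(-eta * (S @ ell_hat))      # exponential-weights update
        weights /= weights.max()              # rescale only for numerical stability

    return weights / weights.sum()            # final distribution over actions in S

# Toy usage: S = standard basis vectors, recovering the classical multi-armed bandit.
d, n = 5, 1000
rng = np.random.default_rng(1)
losses = rng.random((n, d))                   # adversary's loss vectors, hidden from the learner
p_final = geometric_hedge_sketch(np.eye(d), lambda t, v: float(v @ losses[t]), n)

The loss estimate ell_hat is unbiased because its expectation under the sampling distribution is exactly the inverse covariance times E[v v^T] times the true loss vector; this importance-weighting device is the standard ingredient behind GeometricHedge-type regret bounds.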

► Online linear optimization under bandit feedback. ► Focus on combinatorial decision spaces. ► GeometricHedge algorithm with uniform exploration. ► Regret bounds tight in several cases. ► Known efficient implementation in some cases.
