Regret Minimization in Repeated Games: A Set-Valued Dynamic Programming Approach
The regret-minimization paradigm has emerged as an effective technique for designing algorithms for online decision-making in adversarial environments. So far, however, designing exact optimal policies that minimize the worst-case regret has proven to be a difficult task in general. In this paper, we present a novel set-valued dynamic programming approach for characterizing such regret-optimal policies in repeated games with discounted losses and finite action sets. Our approach first draws a connection between worst-case regret minimization and determining minimal achievable guarantees in repeated games with vector-valued losses. We then characterize the set of these minimal guarantees as the fixed point of a dynamic programming operator defined on the space of Pareto frontiers of convex and compact sets. This characterization simultaneously yields the optimal strategies that achieve these minimal guarantees, and hence the regret-optimal strategies in the original repeated game. Building on this characterization, we propose an approximate value iteration procedure to compute ϵ-regret-optimal strategies for any ϵ>0, for the case where the decision maker has only two available actions. As an illustration of this approach, we design a simple near-optimal strategy for the problem of prediction using expert advice for the case of two experts.
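To make the set-valued value iteration concrete, the sketch below iterates a dynamic programming operator over a finite approximation of the Pareto frontier of achievable guarantee vectors, for a two-expert prediction setup with discounted losses. It is only an illustrative sketch under assumptions not taken from the paper: the specific regret-vector payoffs, the discount factor `BETA`, the mixed-action grid `ALPHAS`, and the frontier cap `MAX_FRONTIER` are all hypothetical choices, and the operator here (componentwise max over adversary actions of stage regret plus a discounted continuation promise per adversary action) is a simplified stand-in for the paper's exact operator and approximation guarantees.

```python
import itertools
import numpy as np

# Illustrative setup (assumed, not taken from the paper): prediction with two experts.
# The adversary picks which expert is correct (b in {0,1}); the decision maker (DM)
# follows expert a in {0,1}. The vector-valued loss r(a, b) has one component per
# expert: (DM's loss) - (expert j's loss).
def regret_vector(a, b):
    dm_loss = float(a != b)
    return np.array([dm_loss - float(0 != b), dm_loss - float(1 != b)])

BETA = 0.8                            # discount factor (assumed value)
ALPHAS = np.linspace(0.0, 1.0, 21)    # grid over the DM's mixed actions
ACTIONS_ADV = [0, 1]
MAX_FRONTIER = 60                     # crude cap on points kept per iteration

def pareto_min(points):
    """Keep only the Pareto-minimal points of a finite set in R^2."""
    pts = sorted(points, key=lambda p: (p[0], p[1]))
    frontier, best_second = [], np.inf
    for p in pts:
        if p[1] < best_second - 1e-12:
            frontier.append(np.array(p))
            best_second = p[1]
    return frontier

def apply_operator(frontier):
    """One step of approximate set-valued value iteration: for each mixed action
    and each choice of continuation promises (one frontier point per adversary
    action), the guaranteed vector is the componentwise max over adversary actions
    of (1 - BETA) * expected stage regret + BETA * continuation promise."""
    candidates = []
    for alpha in ALPHAS:
        stage = {b: (1 - alpha) * regret_vector(0, b) + alpha * regret_vector(1, b)
                 for b in ACTIONS_ADV}
        for promises in itertools.product(frontier, repeat=len(ACTIONS_ADV)):
            vecs = [(1 - BETA) * stage[b] + BETA * promises[i]
                    for i, b in enumerate(ACTIONS_ADV)]
            candidates.append(np.max(vecs, axis=0))
    new_frontier = pareto_min(candidates)
    # subsample to keep the frontier representation small
    if len(new_frontier) > MAX_FRONTIER:
        idx = np.linspace(0, len(new_frontier) - 1, MAX_FRONTIER).astype(int)
        new_frontier = [new_frontier[i] for i in idx]
    return new_frontier

# Start from a single loose guarantee and iterate the operator.
frontier = [np.array([1.0, 1.0])]
for _ in range(30):
    frontier = apply_operator(frontier)

# The minimal worst-case regret is the smallest max-component on the frontier.
print("approx. minimal worst-case discounted regret:",
      min(max(p) for p in frontier))
```

In this sketch, the worst-case regret of a guarantee vector is its largest component, so the minimal worst-case discounted regret is approximated by the smallest max-component over the computed frontier; the corresponding mixed action and continuation promises would, in the same spirit, define an ϵ-regret-optimal strategy.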