Minimal Expected Regret in Linear Quadratic Control

09/29/2021
by Yassir Jedra, et al.

We consider the problem of online learning in Linear Quadratic Control systems whose state transition and state-action transition matrices A and B may be initially unknown. We devise an online learning algorithm and provide guarantees on its expected regret. This regret at time T is upper bounded (i) by O((d_u + d_x)√(d_x T)) when A and B are unknown, (ii) by O(d_x^2 log(T)) if only A is unknown, and (iii) by O(d_x(d_u + d_x) log(T)) if only B is unknown and under some mild non-degeneracy condition (d_x and d_u denote the dimensions of the state and of the control input, respectively). These regret scalings are minimal in T, d_x, and d_u as they match existing lower bounds in scenario (i) when d_x ≤ d_u [SF20], and in scenario (ii) [lai1986]. We conjecture that our upper bounds are also optimal in scenario (iii) (there is no known lower bound in this setting). Existing online algorithms proceed in epochs of (typically exponentially) growing durations. The control policy is fixed within each epoch, which considerably simplifies the analysis of the estimation error on A and B and hence of the regret. Our algorithm departs from this design choice: it is a simple variant of certainty-equivalence regulators, where the estimates of A and B and the resulting control policy can be updated as frequently as we wish, possibly at every step. Quantifying the impact of such a constantly varying control policy on the performance of these estimates and on the regret constitutes one of the technical challenges tackled in this paper.
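To make the design concrete, below is a minimal sketch, not the authors' algorithm, of a certainty-equivalence regulator that refreshes its regularized least-squares estimates of A and B and recomputes its control gain at every step rather than once per epoch. The dimensions, cost matrices Q and R, regularization, and decaying exploration noise are all illustrative assumptions.

```python
# Minimal sketch of a per-step certainty-equivalence LQR learner.
# All constants (dimensions, Q, R, noise scales) are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
d_x, d_u, T = 3, 2, 500                      # state dim, input dim, horizon

A_true = 0.9 * np.eye(d_x)                   # unknown to the learner
B_true = rng.standard_normal((d_x, d_u))     # unknown to the learner
Q, R = np.eye(d_x), np.eye(d_u)              # quadratic stage costs

def ce_gain(A_hat, B_hat):
    """Certainty-equivalence gain: solve the discrete Riccati equation
    for the current estimates and return K such that u = -K x."""
    P = solve_discrete_are(A_hat, B_hat, Q, R)
    return np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

# Regularized least-squares statistics for Theta = [A B].
S = np.zeros((d_x, d_x + d_u))               # running sum of x_{t+1} z_t^T
V = np.eye(d_x + d_u)                        # regularized Gram matrix of z_t

x = np.zeros(d_x)
cost = 0.0
for t in range(T):
    Theta_hat = S @ np.linalg.inv(V)         # current estimate [A_hat, B_hat]
    A_hat, B_hat = Theta_hat[:, :d_x], Theta_hat[:, d_x:]
    try:
        K = ce_gain(A_hat, B_hat)
    except Exception:                        # estimates not yet stabilizable
        K = np.zeros((d_u, d_x))
    # Control with a small decaying exploration perturbation (illustrative).
    u = -K @ x + rng.standard_normal(d_u) / np.sqrt(t + 1)
    x_next = A_true @ x + B_true @ u + rng.standard_normal(d_x)
    # Update the least-squares statistics: no epochs, every single step.
    z = np.concatenate([x, u])
    S += np.outer(x_next, z)
    V += np.outer(z, z)
    cost += x @ Q @ x + u @ R @ u            # accumulated cost; regret would
    x = x_next                               # subtract the optimal controller's cost
```

An epoch-based method would freeze K between (typically exponentially spaced) update times; here both the estimate and the gain change at every step, which is precisely the coupling between estimation error and control whose analysis the paper identifies as its main technical challenge.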
