Learning the Kalman Filter with Fine-Grained Sample Complexity

01/30/2023
by Xiangyuan Zhang, et al.

We develop the first end-to-end sample complexity analysis of model-free policy gradient (PG) methods for discrete-time infinite-horizon Kalman filtering. Specifically, we introduce the receding-horizon policy gradient (RHPG-KF) framework and demonstrate 𝒪̃(ϵ^-2) sample complexity for RHPG-KF in learning a stabilizing filter that is ϵ-close to the optimal Kalman filter. Notably, the proposed RHPG-KF framework neither requires the system to be open-loop stable nor assumes any prior knowledge of a stabilizing filter. Our results shed light on applying model-free PG methods to control a linear dynamical system whose state measurements may be corrupted by statistical noise and other (possibly adversarial) disturbances.
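The abstract does not spell out the RHPG-KF algorithm itself, but as background for the setting it studies, here is a minimal sketch of one predict-update step of the standard discrete-time Kalman filter that the learned filter is compared against. The function name and all system matrices are illustrative assumptions, not taken from the paper.

```python
# Sketch of one step of the discrete-time Kalman filter (background for the
# filtering setting in the abstract; the matrices below are illustrative).
import numpy as np

def kalman_step(x_hat, P, y, A, C, Q, R):
    """One predict-update step of the standard Kalman filter.

    x_hat: current state estimate; P: estimate covariance;
    y: new measurement; A, C: system/observation matrices;
    Q, R: process/measurement noise covariances.
    """
    # Predict: propagate the estimate and covariance through the dynamics.
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction using the innovation y - C x_pred.
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_hat)) - K @ C) @ P_pred
    return x_new, P_new
```

A model-free PG method in this setting would search directly over filter gains from sampled trajectories, rather than computing K from known (A, C, Q, R) as above.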
