How are policy gradient methods affected by the limits of control?

06/14/2022
by Ingvar Ziemann, et al.

We study stochastic policy gradient methods from the perspective of control-theoretic limitations. Our main result is that ill-conditioned linear systems in the sense of Doyle inevitably lead to noisy gradient estimates. We also give an example of a class of stable systems in which policy gradient methods suffer from the curse of dimensionality. Our results apply to both state feedback and partially observed systems.
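As an illustrative sketch of the kind of estimator the abstract refers to, the following toy example runs a two-point zeroth-order stochastic policy gradient on a small state-feedback LQR instance. The system matrices, horizon, and all parameters here are hypothetical and chosen only for illustration; they are not taken from the paper, and the repeated-run spread merely shows that such gradient estimates are themselves random variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stable 2-state, 1-input linear system (not from the paper).
n, m = 2, 1
A = np.array([[0.9, 0.2], [0.0, 0.9]])  # eigenvalues 0.9, 0.9 -> stable
B = np.array([[0.0], [1.0]])
Q = np.eye(n)
R = np.eye(m)

def cost(K, horizon=50, n_rollouts=20, noise_std=0.1):
    """Monte Carlo estimate of the finite-horizon LQR cost under u = -K x."""
    total = 0.0
    for _ in range(n_rollouts):
        x = rng.normal(size=(n, 1))
        c = 0.0
        for _ in range(horizon):
            u = -K @ x
            c += float(x.T @ Q @ x + u.T @ R @ u)
            x = A @ x + B @ u + noise_std * rng.normal(size=(n, 1))
        total += c
    return total / n_rollouts

def zeroth_order_grad(K, radius=0.05, n_samples=10):
    """Two-point zeroth-order estimate of the policy gradient at gain K."""
    g = np.zeros_like(K)
    d = K.size
    for _ in range(n_samples):
        U = rng.normal(size=K.shape)
        U /= np.linalg.norm(U)  # uniform direction on the sphere
        g += (cost(K + radius * U) - cost(K - radius * U)) / (2 * radius) * d * U
    return g / n_samples

K = np.zeros((m, n))
# Repeated estimates at the same K differ run to run: the estimator is noisy.
grads = [zeroth_order_grad(K) for _ in range(5)]
spread = float(np.std([np.linalg.norm(g) for g in grads]))
```

On ill-conditioned systems the variance of such estimates grows, which is the phenomenon the paper's lower bounds quantify.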
