A gradient estimator via L1-randomization for online zero-order optimization with two point feedback

05/27/2022
by Arya Akhavan et al.

This work studies online zero-order optimization of convex and Lipschitz functions. We present a novel gradient estimator based on two function evaluations and randomization on the ℓ_1-sphere. Considering different geometries of feasible sets and Lipschitz assumptions, we analyse the online mirror descent algorithm with our estimator in place of the usual gradient. We consider two types of assumptions on the noise of the zero-order oracle: canceling noise and adversarial noise. We provide an anytime and completely data-driven algorithm that is adaptive to all parameters of the problem. In the case of canceling noise, which was previously studied in the literature, our guarantees are either comparable to or better than the state-of-the-art bounds obtained by <cit.> and <cit.> for non-adaptive algorithms. Our analysis is based on deriving a new Poincaré-type inequality for the uniform measure on the ℓ_1-sphere with explicit constants, which may be of independent interest.
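To make the idea concrete, here is a minimal sketch of a two-point gradient estimator with ℓ_1-sphere randomization. The exact form, scaling constants, and smoothing schedule used in the paper may differ; this sketch assumes the common two-point shape ĝ = (d / 2h) · (f(x + hζ) − f(x − hζ)) · sign(ζ), with ζ drawn uniformly from the ℓ_1-sphere (obtainable by normalizing i.i.d. Laplace samples by their ℓ_1 norm). The function names are illustrative, not the paper's.

```python
import numpy as np

def sample_l1_sphere(d, rng):
    """Draw a point uniformly from the unit l1-sphere in R^d.

    Normalizing i.i.d. Laplace (double-exponential) samples by their
    l1 norm yields the uniform distribution on the l1-sphere.
    """
    z = rng.laplace(size=d)
    return z / np.abs(z).sum()

def l1_two_point_gradient(f, x, h, rng):
    """Illustrative two-point zero-order gradient estimate at x.

    Queries the oracle at x + h*zeta and x - h*zeta only; the scaling
    d/(2h) and the sign(zeta) factor are one plausible choice making
    the estimator unbiased for linear functions.
    """
    d = x.shape[0]
    zeta = sample_l1_sphere(d, rng)
    return (d / (2 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * np.sign(zeta)
```

In an online mirror descent loop, this estimate would simply replace the true gradient in the update step; for a linear function f(x) = c·x, averaging many such estimates recovers c.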
