Online Learning with Adversaries: A Differential Inclusion Analysis

04/04/2023
by Swetha Ganesh et al.

We consider the measurement model Y = AX, where X and, hence, Y are random variables and A is an a priori known tall matrix. At each time instant, a sample of one of Y's coordinates is available, and the goal is to estimate μ := 𝔼[X] from these samples. The challenge is that a small but unknown subset of Y's coordinates is controlled by adversaries with infinite power: they can return any real number each time they are queried for a sample. For this adversarial setting, we propose the first asynchronous online algorithm that converges to μ almost surely. We prove this result using a novel differential-inclusion-based two-timescale analysis. Two key highlights of our proof are: (a) the use of a novel Lyapunov function to show that μ is the unique global attractor of our algorithm's limiting dynamics, and (b) the use of martingale and stopping-time theory to show that our algorithm's iterates are almost surely bounded.
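To make the setting concrete, here is a minimal sketch of the measurement model in Python. It is not the authors' algorithm; the dimensions, the adversarial index set `bad`, and the sampling routine are all illustrative assumptions, used only to show what an asynchronous stream of (possibly adversarial) coordinate samples looks like.

```python
import numpy as np

# Illustrative setup (assumed values, not from the paper).
rng = np.random.default_rng(0)

n, d = 8, 3                      # A is "tall": more coordinates of Y than of X
A = rng.standard_normal((n, d))  # a priori known tall matrix
mu = np.array([1.0, -2.0, 0.5])  # unknown mean of X to be estimated

bad = {2, 5}                     # small, unknown set of adversarial coordinates

def sample_coordinate(i):
    """Return one sample of Y_i = (A X)_i; adversarial coordinates return
    an arbitrary real number, modeling an adversary with infinite power."""
    if i in bad:
        return rng.uniform(-1e6, 1e6)    # adversary: any real number
    x = mu + rng.standard_normal(d)      # honest sample of X
    return A[i] @ x

# Asynchronous stream: at each time instant, one coordinate is queried.
for t in range(5):
    i = rng.integers(n)
    print(f"t={t}: coordinate {i}, sample = {sample_coordinate(i):.3f}")
```

Under this model, a naive estimator such as least squares on the observed samples can be driven arbitrarily far from μ by the adversarial coordinates, which is why a robust, provably convergent online algorithm is needed.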
