Constrained Online Two-stage Stochastic Optimization: New Algorithms via Adversarial Learning

02/02/2023
by Jiashuo Jiang, et al.

We consider an online two-stage stochastic optimization problem with long-term constraints over a finite horizon of T periods. At each period, we take a first-stage action, observe the realization of a model parameter, and then take a second-stage action from a feasible set that depends on both the first-stage decision and the model parameter. We aim to minimize the cumulative objective value while guaranteeing that the long-term average of the second-stage decisions belongs to a prescribed set. We propose a general algorithmic framework that derives online algorithms for the online two-stage problem from adversarial learning algorithms; moreover, the regret bound of our algorithm can be reduced to the regret bound of the embedded adversarial learning algorithm. Based on our framework, we obtain new results under various settings. When the model parameter at each period is drawn independently from an identical distribution, we derive a state-of-the-art regret bound that improves previous bounds in special cases. Our algorithm is also robust to adversarial corruptions of the model parameter realizations. When the model parameters are drawn from unknown non-stationary distributions and we are given prior estimates of those distributions, we develop a new algorithm from our framework with regret O(W_T + √T), where W_T measures the total inaccuracy of the prior estimates.
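The abstract does not spell out the reduction, but one standard way to turn an adversarial learner into an online algorithm for long-term constraints is a primal-dual loop: the learner maintains dual prices for the constraint, and each period's two-stage subproblem is solved with the constraint priced into the objective. The Python sketch below illustrates this shape only; the helper names (solve_first_stage, solve_second_stage, sample_parameter, constraint_violation), the scalar dual variable, and the projected-gradient update are all illustrative assumptions, not the paper's actual method or API.

def online_two_stage(T, eta, solve_first_stage, solve_second_stage,
                     sample_parameter, constraint_violation):
    # Hypothetical primal-dual instantiation of the framework:
    # an adversarial online learner (here, projected online gradient
    # ascent) maintains a dual price for the long-term constraint.
    dual = 0.0        # dual variable (a vector for multiple constraints)
    total_cost = 0.0
    for t in range(T):
        # First-stage action chosen against the current dual price.
        x = solve_first_stage(dual)
        # Observe the model parameter realization for this period.
        theta = sample_parameter(t)
        # Second-stage action from the feasible set determined by
        # (x, theta), again with the constraint priced in via the dual.
        y, cost = solve_second_stage(x, theta, dual)
        total_cost += cost
        # Adversarial-learning update: gradient ascent on the dual,
        # using the observed constraint violation as the loss gradient.
        dual = max(0.0, dual + eta * constraint_violation(y))
    return total_cost

A toy run with placeholder subproblems (purely illustrative: the first stage picks an effort level, the second stage reacts to a demand parameter theta, and the long-term target is an average second-stage decision of at most 0.5):

import random

def demo():
    rng = random.Random(0)
    cost = online_two_stage(
        T=100, eta=0.1,
        solve_first_stage=lambda dual: 1.0 / (1.0 + dual),
        sample_parameter=lambda t: rng.uniform(0.0, 2.0),
        solve_second_stage=lambda x, theta, dual: (
            min(x, theta),
            (theta - x) ** 2 + dual * min(x, theta),
        ),
        constraint_violation=lambda y: y - 0.5,
    )
    print(cost)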
