Smoothed f-Divergence Distributionally Robust Optimization: Exponential Rate Efficiency and Complexity-Free Calibration

06/24/2023
by   Zhenyuan Liu, et al.

In data-driven optimization, sample average approximation is known to suffer from the so-called optimizer's curse, which causes an optimistic bias when evaluating solution performance. This can be tackled by adding a "margin" to the estimated objective value, or via distributionally robust optimization (DRO), a fast-growing approach based on worst-case analysis that gives a protective bound on the attained objective value. However, in all these existing approaches, a statistically guaranteed bound on the true solution performance either requires restrictive conditions and knowledge of the objective function's complexity, or otherwise exhibits an over-conservative rate that depends on the distribution dimension. We argue that a special type of DRO offers strong theoretical advantages with regard to these challenges: it attains a statistical bound on the true solution performance that is the tightest possible in terms of exponential decay rate, for a wide class of objective functions that, notably, does not hinge on function complexity. Correspondingly, its calibration also does not require any complexity information. This DRO uses an ambiguity set based on a KL divergence smoothed by the Wasserstein or Lévy-Prokhorov distance via a suitable distance optimization. Computationally, we also show that such a DRO, and its generalized version using smoothed f-divergences, is not much harder than standard DRO problems based on the f-divergence or Wasserstein distance, thus supporting the strengths of this DRO as both statistically optimal and computationally viable.
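As a rough sketch of the formulation described above (the notation here is illustrative and not taken verbatim from the paper), the smoothed KL-divergence DRO can be written as a worst-case expectation over an ambiguity set in which the KL divergence is relaxed through a Wasserstein or Lévy-Prokhorov ball:

\[
\min_{x} \; \sup_{Q:\, D_\epsilon(Q \,\|\, \hat{P}_n) \le \eta} \mathbb{E}_Q\!\left[h(x,\xi)\right],
\qquad
D_\epsilon(Q \,\|\, \hat{P}_n) \;=\; \inf_{Q':\, W(Q, Q') \le \epsilon} D_{\mathrm{KL}}(Q' \,\|\, \hat{P}_n),
\]

where \(\hat{P}_n\) denotes the empirical distribution, \(h(x,\xi)\) the objective, \(W\) the Wasserstein (or Lévy-Prokhorov) distance, and \(\epsilon, \eta\) are calibration parameters; the exact smoothing direction and the complexity-free choice of \(\eta\) are specified in the paper rather than reproduced here.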
