On the Finite Optimal Convergence of Logic-Based Benders' Decomposition in Solving 0-1 Min-max Regret Optimization Problems with Interval Costs
This paper addresses a class of problems under interval data uncertainty composed of min-max regret versions of classical 0-1 optimization problems with interval costs. We refer to them as interval 0-1 min-max regret problems. The state-of-the-art exact algorithms for this class of problems work by solving a corresponding mixed integer linear programming formulation in a Benders' decomposition fashion. Each of the possibly exponentially many Benders' cuts is separated on the fly by solving an instance of the classical 0-1 optimization problem counterpart. Since these separation subproblems may be NP-hard, not all of them can be modeled by means of linear programming, unless P = NP. In these cases, the convergence of the aforementioned algorithms is not guaranteed in a straightforward manner. In fact, to the best of our knowledge, their finite convergence has not been explicitly proved for any interval 0-1 min-max regret problem. In this work, we formally describe these algorithms through the definition of a logic-based Benders' decomposition framework and prove their convergence to an optimal solution in a finite number of iterations. As this framework is applicable to any interval 0-1 min-max regret problem, its finite optimal convergence also holds in the cases where the separation subproblems are NP-hard.
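To make the cut-generation scheme described above concrete, the following is a minimal Python sketch of a logic-based Benders-style loop for a generic interval 0-1 min-max regret problem. It is not the paper's formulation: the brute-force oracle standing in for the classical 0-1 problem, the `feasible` predicate, and all function names are illustrative assumptions, and a practical implementation would solve the MILP master and the separation subproblem with dedicated solvers.

```python
import itertools

def worst_case_costs(x, lower, upper):
    # Worst-case scenario induced by x: chosen items take their upper cost,
    # unchosen items take their lower cost.
    return [u if xi else l for xi, l, u in zip(x, lower, upper)]

def brute_force_opt(costs, feasible, n):
    # Placeholder oracle for the classical 0-1 counterpart: enumerate all
    # binary vectors and return the cheapest feasible one.
    best_val, best_sol = float("inf"), None
    for bits in itertools.product((0, 1), repeat=n):
        if feasible(bits):
            val = sum(c * b for c, b in zip(costs, bits))
            if val < best_val:
                best_val, best_sol = val, bits
    return best_val, best_sol

def benders_min_max_regret(lower, upper, feasible, n, max_iters=100):
    # Cut pool: each known feasible solution y induces one lower bound on the
    # regret of x, namely sum_i (u_i if x_i else l_i) * (x_i - y_i).
    # Assumes at least one feasible solution exists.
    _, y0 = brute_force_opt([(l + u) / 2 for l, u in zip(lower, upper)],
                            feasible, n)
    cuts = [y0]
    incumbent, incumbent_regret = None, float("inf")
    for _ in range(max_iters):
        # Relaxed master: pick x minimizing the largest regret implied by the
        # known cuts only (here by enumeration; the paper uses a MILP master).
        best_lb, best_x = float("inf"), None
        for bits in itertools.product((0, 1), repeat=n):
            if not feasible(bits):
                continue
            lb = max(
                sum((u if xi else l) * (xi - yi)
                    for xi, yi, l, u in zip(bits, y, lower, upper))
                for y in cuts
            )
            if lb < best_lb:
                best_lb, best_x = lb, bits
        # Separation: compute the true regret of best_x by solving the classical
        # problem under its worst-case scenario.
        costs = worst_case_costs(best_x, lower, upper)
        opt_val, y_new = brute_force_opt(costs, feasible, n)
        true_regret = sum(c * xi for c, xi in zip(costs, best_x)) - opt_val
        if true_regret < incumbent_regret:
            incumbent, incumbent_regret = best_x, true_regret
        if true_regret <= best_lb + 1e-9:
            # Lower bound from the master meets the upper bound: optimal.
            return incumbent, incumbent_regret
        cuts.append(y_new)  # add the new logic-based Benders cut
    return incumbent, incumbent_regret
```

As a toy usage example (again purely illustrative), `benders_min_max_regret([1, 2, 3], [4, 3, 5], lambda b: sum(b) == 2, 3)` computes a min-max regret selection of exactly two out of three items with the given interval costs; the loop terminates finitely because each separation either proves optimality or adds a cut induced by a previously unseen solution, mirroring the convergence argument the paper formalizes.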