Computational Bayes-Predictive Stochastic Programming: Finite Sample Bound
We study stochastic programming models in which the stochastic variable is known only up to a parametrized distribution function that must be estimated from a set of independent and identically distributed (i.i.d.) samples. We take a Bayesian approach, positing a prior distribution over the unknown parameter and computing a posterior predictive distribution over future values of the stochastic variable. A data-driven stochastic program is then solved with respect to this predictive posterior distribution. While this is the standard Bayesian decision-theoretic approach, we focus on problems where calculating the predictive distribution is intractable, a typical situation in modern applications with large datasets, high-dimensional parameters, and heterogeneity due to observed covariates and latent group structure. Rather than constructing sampling approximations to the intractable distribution using standard Markov chain Monte Carlo methods, we study computational approaches to decision-making based on the modern optimization-based methodology of variational Bayes. We consider two approaches: a two-stage approach in which a posterior approximation is constructed and then used to solve the decision problem, and a joint approach that solves the variational approximation and decision problems simultaneously. We analyze the finite-sample performance of the value and optimal decisions of the resulting data-driven stochastic programs.
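The two-stage pipeline described above can be illustrated with a minimal sketch: draw from a posterior predictive distribution, then solve a data-driven stochastic program against those draws. All modeling choices here are assumptions for illustration (exponential demand, a conjugate Gamma prior, and a newsvendor decision problem); the paper targets settings where the predictive is intractable and must be approximated, whereas conjugacy is used below only to keep the example short.

```python
import random

random.seed(0)

# --- Hypothetical setup (illustrative, not from the paper) ---
# Demand ~ Exponential(rate = theta), theta unknown.
# Conjugate Gamma(a0, b0) prior on theta, so the posterior is
# Gamma(a0 + n, b0 + sum(data)).  In the intractable settings the
# paper studies, this stage would be replaced by a variational
# approximation to the posterior.
true_theta = 0.5
data = [random.expovariate(true_theta) for _ in range(50)]

a0, b0 = 1.0, 1.0
a_post = a0 + len(data)
b_post = b0 + sum(data)

# Stage 1: sample the posterior predictive by composition:
# theta ~ posterior, then demand ~ Exponential(theta).
predictive = []
for _ in range(10_000):
    # gammavariate takes (shape, scale); scale = 1 / rate.
    theta = random.gammavariate(a_post, 1.0 / b_post)
    predictive.append(random.expovariate(theta))

# Stage 2: solve the data-driven stochastic program (a newsvendor
# instance) against the predictive samples.  The optimal order
# quantity is the critical-fractile quantile cu / (cu + co) of the
# predictive distribution.
cu, co = 4.0, 1.0          # underage / overage costs (assumed)
q = cu / (cu + co)
predictive.sort()
order = predictive[int(q * len(predictive))]
print(round(order, 2))
```

The joint approach discussed in the abstract would instead fold the decision variable into the same optimization that fits the posterior approximation, rather than separating the two stages as above.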