Statistical Testing under Distributional Shifts
Statistical hypothesis testing is a central problem in empirical inference. Observing data from a distribution P^*, one is interested in the hypothesis P^* ∈ H_0 and requires any test to control the probability of false rejections. In this work, we introduce statistical testing under distributional shifts. We are still interested in a target hypothesis P^* ∈ H_0, but observe data from a distribution Q^* in an observational domain. We assume that P^* is related to Q^* through a known shift τ and formally introduce a framework for hypothesis testing in this setting. We propose a general testing procedure that first resamples from the n observed data points to construct an auxiliary data set (mimicking properties of P^*) and then applies an existing test in the target domain. We prove that this procedure holds pointwise asymptotic level, provided that the target test holds pointwise asymptotic level, the size of the resample is at most o(√n), and the resampling weights are well-behaved. We further show that if the map τ is unknown, it can, under mild conditions, be estimated from data while maintaining level guarantees. Testing under distributional shifts allows us to tackle a diverse set of problems. We argue that it may prove useful in reinforcement learning, show how it reduces conditional independence testing to unconditional independence testing, and provide example applications in causal inference. Easy-to-use code will be available online.
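The resampling procedure described above can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: it assumes a toy setting where the target distribution P^* is N(0, 1), the observed distribution Q^* is N(0.5, 1), the shift τ is encoded by the known density ratio dP^*/dQ^*, the resample size is chosen as n^0.45 (one possible o(√n) rate), and a one-sample Kolmogorov–Smirnov test plays the role of the target-domain test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical setup: target P* = N(0, 1), observed Q* = N(0.5, 1).
n = 10_000
x = rng.normal(loc=0.5, scale=1.0, size=n)  # n observations from Q*

# The known shift tau enters through the density ratio dP*/dQ*,
# which serves as the (unnormalized) resampling weight.
def density_ratio(v):
    return stats.norm.pdf(v, loc=0.0, scale=1.0) / stats.norm.pdf(v, loc=0.5, scale=1.0)

w = density_ratio(x)
w = w / w.sum()  # normalize to a probability vector

# Resample size m = o(sqrt(n)); n^0.45 is one admissible choice.
m = int(n ** 0.45)

# Construct the auxiliary data set mimicking properties of P*.
idx = rng.choice(n, size=m, replace=True, p=w)
aux = x[idx]

# Apply an existing test in the target domain, here a KS test
# of H0: the auxiliary sample is distributed as N(0, 1).
result = stats.kstest(aux, "norm")
```

Since the density ratio is exact in this toy example, the auxiliary sample approximately follows P^*, so the target-domain test keeps its level; the choice of the o(√n) rate trades off power against the validity of the level guarantee.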