Assumption-lean falsification tests of rate double-robustness of double-machine-learning estimators

06/18/2023
by   Lin Liu, et al.

In this article we develop a feasible version of the assumption-lean tests in Liu et al. 20 that can falsify an analyst's justification for the validity of a reported nominal (1 - α) Wald confidence interval (CI) centered at a double machine learning (DML) estimator for any member of the class of doubly robust (DR) functionals studied by Rotnitzky et al. 21. The class of DR functionals is broad and of central importance in economics and biostatistics. It strictly includes both (i) the class of mean-square continuous functionals that can be written as an expectation of an affine functional of a conditional expectation, studied by Chernozhukov et al. 22, and (ii) the class of functionals studied by Robins et al. 08. The current state-of-the-art estimators for DR functionals ψ are DML estimators ψ̂_1. The bias of ψ̂_1 depends on the product of the rates at which two nuisance functions b and p are estimated. Most commonly, an analyst justifies the validity of her Wald CIs by proving that, under her complexity-reducing assumptions, the Cauchy-Schwarz (CS) upper bound for the bias of ψ̂_1 is o(n^{-1/2}). Thus if the hypothesis H_0: "the CS upper bound is o(n^{-1/2})" is rejected by our test, we will have falsified the analyst's justification for the validity of her Wald CIs. In this work, we exhibit a valid assumption-lean falsification test of H_0, without relying on complexity-reducing assumptions on b, p, or their estimates b̂, p̂. Simulation experiments are conducted to demonstrate how the proposed assumption-lean test can be used in practice. An unavoidable limitation of our methodology is that no assumption-lean test of H_0, including ours, can be a consistent test. Thus failure of our test to reject is not meaningful evidence in favor of H_0.
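The product-of-rates bias structure the abstract refers to can be illustrated with a minimal toy sketch (not the paper's test, and with hypothetical nuisance functions chosen purely for illustration). Below, ψ = E[Y] is estimated from data with outcomes missing at random using the classical augmented inverse-probability-weighted (AIPW) estimator, a simple member of the DR class: its bias is (up to sampling noise) the expectation of the product of the errors in the outcome regression b and the propensity p, so it vanishes when either nuisance is correct but not when both are perturbed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy data-generating process (illustrative, not from the paper):
# X uniform, outcome Y = b(X) + noise, Y observed (R = 1) w.p. p(X).
X = rng.uniform(size=n)
b = lambda x: 1.0 + x            # true outcome regression b(x) = E[Y | X = x]
p = lambda x: 0.5 + 0.3 * x      # true propensity p(x) = P(R = 1 | X = x)
Y = b(X) + rng.normal(size=n)
R = rng.binomial(1, p(X))

def aipw(bhat, phat):
    """Doubly robust (AIPW) estimate of psi = E[Y] from (X, R, R*Y)."""
    return np.mean(R * Y / phat(X) + (1 - R / phat(X)) * bhat(X))

psi_true = 1.5  # E[1 + X] with X ~ Uniform(0, 1)

# Perturb the nuisances by fixed errors. Algebraically the bias of
# aipw(bhat, phat) equals E[(bhat - b)(phat - p) / phat], a *product*
# of the two nuisance errors: second-order, hence "rate double robust".
est_b_wrong    = aipw(lambda x: b(x) + 0.3, p)                      # only b off
est_p_wrong    = aipw(b, lambda x: p(x) + 0.1)                      # only p off
est_both_wrong = aipw(lambda x: b(x) + 0.3, lambda x: p(x) + 0.1)   # both off

print(abs(est_b_wrong - psi_true))     # near zero: p is correct
print(abs(est_p_wrong - psi_true))     # near zero: b is correct
print(abs(est_both_wrong - psi_true))  # visible product bias
```

The point of the sketch is the contrast in the three printed errors: with either nuisance correct, the error is pure sampling noise, while with both perturbed a bias of order (error in b) × (error in p) appears. The paper's test targets exactly whether the analyst can justify that this product term is o(n^{-1/2}).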
