Efficiency requires innovation
In estimating a parameter θ ∈ ℝ from a sample (x_1, ..., x_n) from a population P_θ, a simple way of incorporating a new observation x_{n+1} into an estimator θ̃_n = θ̃_n(x_1, ..., x_n) is to transform θ̃_n into what we call the jackknife extension θ̃_{n+1}^{(e)} = θ̃_{n+1}^{(e)}(x_1, ..., x_n, x_{n+1}), defined by θ̃_{n+1}^{(e)} = {θ̃_n(x_1, ..., x_n) + θ̃_n(x_{n+1}, x_2, ..., x_n) + ... + θ̃_n(x_1, ..., x_{n-1}, x_{n+1})}/(n+1). Though θ̃_{n+1}^{(e)} lacks the innovation the statistician could expect from a larger data set, it is still better than θ̃_n: var(θ̃_{n+1}^{(e)}) ≤ (n/(n+1)) var(θ̃_n). However, an estimator obtained by jackknife extension for all n is asymptotically efficient only for samples from exponential families. For a general P_θ, asymptotically efficient estimators require innovation when a new observation is added to the data. Some examples illustrate the concept.
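As an illustrative sketch (not part of the paper), the jackknife extension of a generic n-sample estimator can be computed by averaging the estimator over all n+1 leave-one-out subsamples of (x_1, ..., x_n, x_{n+1}); the function and variable names `jackknife_extension`, `estimator`, `x`, and `x_new` below are hypothetical stand-ins chosen for the example.

```python
import numpy as np

def jackknife_extension(estimator, x, x_new):
    """Jackknife extension of an n-sample estimator to n+1 observations.

    Averages `estimator` over all n+1 size-n subsamples of
    (x_1, ..., x_n, x_{n+1}), matching the displayed formula.
    Names here are illustrative assumptions, not fixed by the paper.
    """
    full = np.append(np.asarray(x, dtype=float), x_new)  # (x_1, ..., x_{n+1})
    # Leave out each observation in turn and apply the n-sample estimator.
    values = [estimator(np.delete(full, i)) for i in range(len(full))]
    return np.mean(values)

# Example: extending the sample mean, for which the jackknife extension
# coincides with the mean of the full (n+1)-point sample.
rng = np.random.default_rng(0)
x = rng.normal(size=10)
x_new = rng.normal()
print(jackknife_extension(np.mean, x, x_new))
print(np.mean(np.append(x, x_new)))
```

For the sample mean the two printed values agree exactly, which is consistent with the abstract's point that the extension alone can be adequate in special (e.g. exponential-family) settings but supplies no new "innovation" beyond recombining θ̃_n.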