A Generalization of Self-Improving Algorithms
Ailon et al. [SICOMP'11] proposed self-improving algorithms for sorting and Delaunay triangulation (DT) when the input instances x_1, ⋯, x_n follow some unknown product distribution. That is, each x_i comes from a fixed unknown distribution 𝒟_i, and the x_i's are drawn independently. After spending O(n^{1+ε}) time in a learning phase, the subsequent expected running time is O((n + H)/ε), where H ∈ {H_S, H_DT}, and H_S and H_DT are the entropies of the distributions of the sorting and DT outputs, respectively. In this paper, we allow dependence among the x_i's under the group product distribution: there is a hidden partition of [1, n] into groups; the x_i's in the k-th group are fixed unknown functions of the same hidden variable u_k; and the u_k's are drawn from an unknown product distribution. We describe self-improving algorithms for sorting and DT under this model when the functions that map u_k to the x_i's are well-behaved. After an O(poly(n))-time training phase, we achieve O(n + H_S) and O(n α(n) + H_DT) expected running times for sorting and DT, respectively, where α(·) is the inverse Ackermann function.
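To make the group product distribution concrete, here is a minimal illustrative sketch (not taken from the paper) of how an input instance could be generated under this model. The partition into groups, the per-group distributions of the u_k's, and the maps f_i are all hypothetical choices for illustration; in the model they are hidden from the algorithm, which only observes the resulting x_i's during training.

```python
import random

n = 8
# Hypothetical hidden partition of the indices 0..n-1 into groups.
groups = [[0, 1, 2], [3, 4], [5, 6, 7]]

def draw_u(k):
    # Hypothetical hidden distribution for u_k; the u_k's are independent.
    return random.gauss(k, 1.0)

def f(i, u):
    # Hypothetical fixed (unknown) function mapping u_k to x_i. The paper
    # requires these maps to be "well-behaved"; the exact condition is
    # specified there, not here.
    return (i + 1) * u + (i % 3)

def sample_instance():
    """Draw one instance x_1, ..., x_n under the group product distribution."""
    x = [0.0] * n
    for k, group in enumerate(groups):
        u_k = draw_u(k)          # one hidden variable per group
        for i in group:
            x[i] = f(i, u_k)     # every x_i in the group depends only on u_k
    return x

# A self-improving sorter would train on many such instances, then sort
# fresh instances in O(n + H_S) expected time.
print(sample_instance())
```

Note that within a group the coordinates are fully dependent (they are deterministic functions of the same u_k), while coordinates in different groups remain independent; the classical product-distribution setting is recovered when every group is a singleton.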