Convergence rates for smooth k-means change-point detection
In this paper, we consider the estimation of a change point for possibly high-dimensional data in a Gaussian model, using a k-means method. We prove that, up to a logarithmic term, this change-point estimator achieves the minimax rate of convergence. Then, turning to the case of sparse data with Sobolev regularity, we propose a smoothing procedure based on Lepski's method and show that the resulting estimator attains the optimal rate of convergence. Our results are illustrated by simulations. Since the theoretical guarantee derived from Lepski's method depends on an unknown constant, we suggest practical strategies for performing the smoothing optimally.
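The abstract names a k-means change-point estimator but gives no formulas. Purely as an illustrative sketch, and not the paper's actual procedure, the snippet below implements the standard least-squares version of this idea: a 2-means fit constrained to contiguous segments, scanning all candidate split points of a mean-shift Gaussian sequence. The function name and interface are hypothetical.

```python
import numpy as np

def kmeans_changepoint(X):
    """Estimate a single change point by minimizing the within-segment
    sum of squares (a 2-means fit with contiguous clusters).

    X : array of shape (n, d) -- n observations of dimension d.
    Returns t such that the segments are X[:t] and X[t:].
    """
    n = X.shape[0]
    # Prefix sums let each candidate split be scored in O(d) time.
    csum = np.cumsum(X, axis=0)             # cumulative sums of rows
    csq = np.cumsum(np.sum(X**2, axis=1))   # cumulative squared norms
    best_t, best_cost = 1, np.inf
    for t in range(1, n):
        # Within-segment SSE = total squared norm minus the part
        # explained by each segment mean: ||sum||^2 / segment length.
        left = np.sum(csum[t - 1] ** 2) / t
        right = np.sum((csum[-1] - csum[t - 1]) ** 2) / (n - t)
        cost = csq[-1] - left - right
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Toy usage: 20-dimensional Gaussian data with a mean shift at index 60.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (60, 20)),
               rng.normal(0.5, 1.0, (40, 20))])
print(kmeans_changepoint(X))  # should be close to the true change point 60
```

Exhaustive scanning of the split point is the natural choice here since only a single change point is estimated; the prefix-sum trick keeps the total cost at O(nd) rather than O(n^2 d).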