Understanding and Eliminating the Large-kernel Effect in Blind Deconvolution
Blind deconvolution aims to recover a sharp image from an observed blurry one without specific knowledge of the degradation kernel. The kernel size, however, is a required hyper-parameter that defines the extent of the kernel's support. In this study, we show experimentally and theoretically that overly large kernel sizes introduce noise into entries that should be exactly zero and yield inferior results. We explain this effect by demonstrating that larger kernels lower the least-squares cost in the optimization, and we prove that, for noisy images, the effect persists with probability one. Using a 1D simulation, we quantify how the error of the estimated kernel grows with its assumed size. To eliminate this effect, we propose a low-rank-based penalty that reflects the structural information of the kernel. Unlike the generic ℓ_α penalty, ours responds to even a small amount of random noise in the kernel. The proposed regularization suppresses this noise and effectively improves the success rate for large kernel sizes. We also compare our method with state-of-the-art approaches and test it on real-world images.
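The following is a minimal 1D sketch (not the paper's code) of the large-kernel effect described above: a blur kernel is fitted by least squares under progressively larger assumed supports, the data cost keeps dropping, and the estimated kernel picks up noise in entries that should be zero. For simplicity the sharp signal is assumed known here (a non-blind proxy); the signal, kernel, noise level, and sizes are illustrative assumptions only.

import numpy as np
from scipy.linalg import convolution_matrix

rng = np.random.default_rng(0)

n = 256
x = np.cumsum(rng.standard_normal(n))          # synthetic "sharp" 1D signal
k_true = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # true blur kernel (support of size 5)
y = np.convolve(x, k_true, mode="same") + 0.01 * rng.standard_normal(n)

for L in (5, 9, 17, 31):                        # assumed (odd) kernel sizes
    X = convolution_matrix(x, L, mode="same")   # y ≈ X @ k for a length-L kernel
    k_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = np.linalg.norm(X @ k_hat - y)    # least-squares data cost
    pad = (L - len(k_true)) // 2
    k_pad = np.pad(k_true, pad)                 # true kernel centered in a length-L window
    err = np.linalg.norm(k_hat - k_pad)         # error of the estimated kernel
    print(f"L={L:2d}  data cost={residual:.4f}  kernel error={err:.4f}")

Because the columns available at a smaller size are contained in those of a larger one, the data cost can only decrease as L grows, while the extra free entries absorb noise, which is the trade-off the abstract refers to.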