Revisiting L1 Loss in Super-Resolution: A Probabilistic View and Beyond
Super-resolution is an ill-posed problem: a single low-resolution input admits many plausible high-resolution (HR) candidates. However, the popular ℓ_1 loss, which trains a network to best fit one given HR image, fails to account for this fundamental non-uniqueness of image restoration. In this work, we supply the missing piece in the ℓ_1 loss by formulating super-resolution with neural networks as a probabilistic model. This formulation shows that the ℓ_1 loss is equivalent to a degraded likelihood function that removes the randomness from the learning process. By introducing a data-adaptive random variable, we derive a new objective function that minimizes the expected reconstruction error over all plausible solutions. Experiments show consistent improvements on mainstream architectures, with no extra parameters or computational cost at inference time.
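To make the probabilistic view concrete, here is a minimal sketch (not the paper's exact objective) of the connection it describes: the plain ℓ_1 loss corresponds to the negative log-likelihood of a Laplace distribution with a fixed scale, while a data-adaptive variant lets the network predict a per-pixel scale so the likelihood can express the spread over plausible HR solutions. All names and the choice of a learned Laplace scale are illustrative assumptions.

```python
import torch

def l1_loss(sr, hr):
    # Fixed-scale Laplace NLL up to an additive constant:
    # the randomness of the restoration problem is discarded.
    return (sr - hr).abs().mean()

def adaptive_laplace_nll(sr, hr, log_b):
    # Illustrative data-adaptive likelihood: the network also predicts a
    # per-pixel log-scale, so b = exp(log_b) > 0. The full Laplace NLL is
    # |sr - hr| / b + log(2b), letting the model weight pixels by how
    # uncertain the HR reconstruction is at that location.
    b = log_b.exp()
    return ((sr - hr).abs() / b + torch.log(2 * b)).mean()
```

In a sketch like this, the scale head is only needed during training; at test time the network outputs just the reconstruction, which is consistent with the abstract's claim of no extra inference cost.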