Generative Adversarial Networks and Perceptual Losses for Video Super-Resolution
Video super-resolution (VSR) has become one of the most critical problems in video processing. In the deep learning literature, recent works have shown the benefits of using perceptual losses to improve performance on various image restoration tasks; however, these have yet to be applied to video super-resolution. In this work, we present the use of a very deep residual neural network, VSRResNet, for performing high-quality video super-resolution. We show that VSRResNet surpasses the current state-of-the-art VSR model in PSNR/SSIM across most scale factors. Furthermore, we train this architecture with a convex combination of adversarial, feature-space, and pixel-space losses to obtain the VSRResFeatGAN model. Finally, we compare the resulting VSR model with current state-of-the-art models using PSNR, SSIM, and a novel perceptual distance metric, the PercepDist metric. Under this latter metric, we show that VSRResFeatGAN outperforms current state-of-the-art SR models, both quantitatively and qualitatively.
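The convex combination of losses mentioned above can be sketched as follows. This is a minimal illustration, assuming MSE forms for the pixel-space and feature-space terms and a standard non-saturating generator term for the adversarial loss; the weight values and exact loss forms are assumptions, not the paper's reported settings.

```python
import numpy as np

def combined_loss(sr, hr, feat_sr, feat_hr, d_fake,
                  alpha=0.5, beta=0.3, gamma=0.2):
    """Convex combination of pixel-space, feature-space, and adversarial
    losses in the style of VSRResFeatGAN.

    sr, hr     : super-resolved and ground-truth frames
    feat_sr/hr : feature-space representations (e.g. from a pretrained net)
    d_fake     : discriminator outputs on generated frames, in (0, 1]
    alpha/beta/gamma : illustrative weights; must sum to 1 (convexity)
    """
    assert abs(alpha + beta + gamma - 1.0) < 1e-9  # enforce convex combination
    pixel_loss = np.mean((sr - hr) ** 2)            # pixel-space MSE
    feat_loss = np.mean((feat_sr - feat_hr) ** 2)   # feature-space MSE
    adv_loss = -np.mean(np.log(d_fake + 1e-12))     # generator adversarial term
    return alpha * pixel_loss + beta * feat_loss + gamma * adv_loss
```

Weighting the perceptual (feature-space) and adversarial terms against the pixel-space term in this way trades off per-pixel fidelity for sharper, more perceptually plausible textures.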