Spatial-Spectral Fusion by Combining Deep Learning and Variation Model
In the field of spatial-spectral fusion, model-based methods and deep learning (DL)-based methods represent the state of the art. This paper presents a fusion method that incorporates a deep neural network into a model-based framework for the most common case in spatial-spectral fusion: panchromatic (PAN)/multispectral (MS) fusion. Specifically, we first map the gradients of the high-spatial-resolution panchromatic (HR-PAN) image and the low-spatial-resolution multispectral (LR-MS) image to the gradient of the high-spatial-resolution multispectral (HR-MS) image via a deep residual convolutional neural network (CNN). We then construct a fusion model that relates the LR-MS image, the gradient prior learned by the gradient network, and the ideal fused image. Finally, the fusion model is solved with an iterative optimization algorithm. Both quantitative and visual assessments on high-quality images from various sources demonstrate that the proposed method outperforms the mainstream algorithms included in the comparison in terms of overall fusion accuracy.
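To make the two-stage idea concrete, the sketch below illustrates one plausible reading of the pipeline described above: a residual CNN that predicts the HR-MS gradient from the PAN and upsampled LR-MS gradients, followed by an iterative solver for a variational objective that balances fidelity to the LR-MS observation against the learned gradient prior. This is a minimal illustration, not the authors' implementation; the names (`GradientNet`, `fuse`), the network depth, the bicubic degradation model, and the weight `lam` are assumptions chosen for readability.

```python
# Hedged sketch of the gradient network + variational fusion step (PyTorch).
# All architectural and solver details here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def grad(x):
    """Forward-difference x/y gradients of an image batch (B, C, H, W)."""
    dx = F.pad(x[..., :, 1:] - x[..., :, :-1], (0, 1, 0, 0))
    dy = F.pad(x[..., 1:, :] - x[..., :-1, :], (0, 0, 0, 1))
    return torch.cat([dx, dy], dim=1)


class GradientNet(nn.Module):
    """Residual CNN mapping (grad PAN, grad upsampled LR-MS) -> grad HR-MS."""
    def __init__(self, ms_bands=4, feats=64, depth=8):
        super().__init__()
        in_ch = 2 * (1 + ms_bands)   # x/y gradients of PAN and of each MS band
        out_ch = 2 * ms_bands        # x/y gradients of the HR-MS estimate
        layers = [nn.Conv2d(in_ch, feats, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth):
            layers += [nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(feats, out_ch, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, pan, ms_up):
        inp = torch.cat([grad(pan), grad(ms_up)], dim=1)
        # Residual learning: predict a correction to the upsampled-MS gradient.
        return grad(ms_up) + self.body(inp)


def fuse(lr_ms, pan, net, scale=4, lam=0.1, steps=200, lr=0.5):
    """Minimize ||down(X) - LR-MS||^2 + lam * ||grad(X) - G||^2 by gradient descent,
    where G is the HR-MS gradient predicted by the trained network."""
    ms_up = F.interpolate(lr_ms, scale_factor=scale, mode="bicubic", align_corners=False)
    with torch.no_grad():
        g_target = net(pan, ms_up)          # learned gradient prior
    x = ms_up.clone().requires_grad_(True)  # initialize with the upsampled LR-MS
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        data_term = F.mse_loss(
            F.interpolate(x, scale_factor=1 / scale, mode="bicubic", align_corners=False),
            lr_ms,
        )
        prior_term = F.mse_loss(grad(x), g_target)
        (data_term + lam * prior_term).backward()
        opt.step()
    return x.detach()
```

In this reading, the data term ties the estimate back to the observed LR-MS image through an assumed bicubic degradation, while the prior term injects the spatial detail predicted by the gradient network; the paper's actual degradation operator and optimization scheme may differ.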