Edge-Preserving Guided Semantic Segmentation for VIPriors Challenge

07/17/2020
by Chih-Chung Hsu, et al.

Semantic segmentation is one of the most attractive research fields in computer vision. In the VIPriors challenge, only a very limited number of training samples is allowed, which makes current state-of-the-art, deep-learning-based semantic segmentation techniques hard to train well. To overcome this shortcoming, we propose edge-preserving guidance that supplies extra prior information and mitigates overfitting on the small-scale training dataset. First, a two-channel convolutional layer is appended to the last layer of a conventional semantic segmentation network. Then, an edge map is computed from the ground truth with the Sobel operator, followed by a hard-thresholding operation that indicates whether each pixel lies on an edge. A two-dimensional cross-entropy loss, termed the edge-preserving loss, is then computed between the predicted edge map and its ground truth. In this way, the proposed edge-preserving loss enforces the continuity of boundaries between different instances. Experiments demonstrate that the proposed method achieves excellent performance on the small-scale training set compared with state-of-the-art semantic segmentation techniques.
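The abstract outlines three steps: an extra two-channel edge head on the segmentation network, a Sobel-plus-threshold edge target derived from the ground-truth mask, and a cross-entropy edge-preserving loss added to the usual segmentation loss. The sketch below illustrates that pipeline in PyTorch under my own assumptions (the 1x1 convolution for the edge head, the threshold value, and the names `EdgeHead`, `edge_targets_from_mask`, and `total_loss` are illustrative, not taken from the authors' implementation).

```python
# Minimal sketch of edge-preserving guidance, assuming a PyTorch segmentation model.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sobel kernels for horizontal and vertical gradients.
_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)


def edge_targets_from_mask(mask, threshold=0.5):
    """Binary edge map (N, H, W) derived from an integer label mask (N, H, W)."""
    m = mask.float().unsqueeze(1)                        # (N, 1, H, W)
    gx = F.conv2d(m, _SOBEL_X.to(m.device), padding=1)
    gy = F.conv2d(m, _SOBEL_Y.to(m.device), padding=1)
    grad = torch.sqrt(gx ** 2 + gy ** 2).squeeze(1)      # gradient magnitude
    return (grad > threshold).long()                     # hard threshold: edge vs. non-edge


class EdgeHead(nn.Module):
    """Two-channel layer appended to the segmentation network's last feature map
    (a 1x1 convolution here; the paper only specifies a two-channel conv layer)."""
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 2, kernel_size=1)

    def forward(self, features):
        return self.conv(features)                       # (N, 2, H, W) edge logits


def total_loss(seg_logits, edge_logits, mask, edge_weight=1.0):
    """Standard segmentation cross-entropy plus the edge-preserving loss term."""
    seg_loss = F.cross_entropy(seg_logits, mask)
    edge_gt = edge_targets_from_mask(mask)
    edge_loss = F.cross_entropy(edge_logits, edge_gt)    # edge-preserving loss
    return seg_loss + edge_weight * edge_loss
```

Because the edge targets are recomputed from the same ground-truth masks, this adds supervision without requiring any extra annotation, which is what makes it attractive in the data-limited VIPriors setting.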
