Automatic Video Object Segmentation via Motion-Appearance-Stream Fusion and Instance-aware Segmentation

12/03/2019
by Sungkwon Choo, et al.

This paper presents a method for automatic video object segmentation based on the fusion of a motion stream, an appearance stream, and instance-aware segmentation. The proposed scheme consists of a two-stream fusion network and an instance segmentation network. The two-stream fusion network in turn consists of motion and appearance stream networks, which extract long-term temporal and spatial information, respectively. Unlike existing two-stream fusion methods, the proposed fusion network blends the two streams at the original resolution to obtain accurate segmentation boundaries. We develop a recurrent bidirectional multiscale structure with skip connections for the stream fusion network to extract long-term temporal information. The multiscale structure also makes it possible to obtain original-resolution features at the end of the network. The two-stream fusion yields a pixel-level probabilistic segmentation map with higher values at pixels belonging to the foreground object. By combining the foreground probability map with the objectness scores of the instance segmentation masks, we finally obtain foreground segmentation results for video sequences without any user intervention, i.e., we achieve fully automatic video segmentation. The proposed structure achieves state-of-the-art performance on the automatic video object segmentation task and approaches the performance of semi-supervised methods.
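The final step described in the abstract, combining the pixel-level foreground probability map from the two-stream fusion network with the objectness scores of instance masks, can be illustrated with a minimal sketch. The combination rule below (mean foreground probability inside each mask multiplied by its objectness score, compared against a fixed threshold) and the function and parameter names are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def select_foreground_instances(fg_prob, instance_masks, objectness_scores,
                                threshold=0.5):
    """Combine a per-pixel foreground probability map with instance masks.

    fg_prob           : (H, W) float array in [0, 1], e.g. from a two-stream
                        fusion network.
    instance_masks    : list of (H, W) boolean arrays from an instance
                        segmentation network.
    objectness_scores : list of floats, one objectness score per mask.
    threshold         : hypothetical cut-off for accepting an instance.

    Returns an (H, W) boolean array: the union of accepted instance masks.
    """
    H, W = fg_prob.shape
    foreground = np.zeros((H, W), dtype=bool)

    for mask, objectness in zip(instance_masks, objectness_scores):
        if not mask.any():
            continue
        # Mean foreground probability inside this instance's mask.
        mean_fg = fg_prob[mask].mean()
        # Assumed combination rule: product of the two confidences.
        combined = mean_fg * objectness
        if combined >= threshold:
            foreground |= mask

    return foreground
```

Under these assumptions, instances whose masks lie mostly in low-probability regions of the fusion output are discarded even if their objectness score is high, while confidently detected instances overlapping the predicted foreground are kept, which requires no user intervention at any point.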
