Frame-To-Frame Consistent Semantic Segmentation
In this work, we aim for temporally consistent semantic segmentation throughout the frames of a video. Many semantic segmentation algorithms process images individually, which leads to an inconsistent scene interpretation due to illumination changes, occlusions and other variations over time. To achieve a temporally consistent prediction, we train a convolutional neural network (CNN) which propagates features through consecutive frames in a video using a convolutional long short-term memory (ConvLSTM) cell. Besides the temporal feature propagation, we penalize inconsistencies in our loss function. We show in our experiments that the performance improves when utilizing video information compared to single-frame prediction. The mean intersection over union (mIoU) metric on the Cityscapes validation set increases from 45.2 for the single frames to 57.9 when we propagate features through time on the ESPNet. Most importantly, the inconsistency decreases from the 4.5 measured for single frames. Our results indicate that the added temporal information produces a frame-to-frame consistent and more accurate image understanding compared to single-frame processing.
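The abstract gives only a high-level description of the architecture and the loss. As a rough illustration, the PyTorch sketch below shows one way per-frame CNN features could be carried across frames by a ConvLSTM cell and how a simple frame-to-frame consistency penalty could be added to the training loss. All names (`ConvLSTMCell`, `TemporalSegmenter`, `sequence_loss`), the L1-style penalty, and the hyperparameters are assumptions for illustration only, not the authors' implementation, which may in addition account for motion between frames.

```python
# Minimal sketch (not the authors' code): per-frame encoder features are
# propagated through a ConvLSTM cell, and an assumed L1 penalty discourages
# the predicted class probabilities from changing between consecutive frames.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """Basic ConvLSTM cell: all four gates come from a single convolution."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size,
                               padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c


class TemporalSegmenter(nn.Module):
    """Encoder features are propagated across frames by the ConvLSTM cell."""

    def __init__(self, encoder, feat_channels, hidden_channels, num_classes):
        super().__init__()
        self.encoder = encoder              # any per-frame feature extractor
        self.lstm = ConvLSTMCell(feat_channels, hidden_channels)
        self.head = nn.Conv2d(hidden_channels, num_classes, kernel_size=1)

    def forward(self, frames):              # frames: (B, T, 3, H, W)
        b, t, _, h, w = frames.shape
        state, logits = None, []
        for step in range(t):
            feats = self.encoder(frames[:, step])
            if state is None:
                zeros = torch.zeros(b, self.lstm.hidden_channels,
                                    *feats.shape[-2:], device=frames.device)
                state = (zeros, zeros.clone())
            state = self.lstm(feats, state)
            out = F.interpolate(self.head(state[0]), size=(h, w),
                                mode="bilinear", align_corners=False)
            logits.append(out)
        return torch.stack(logits, dim=1)    # (B, T, C, H, W)


def sequence_loss(logits, labels, lambda_consistency=0.1):
    """Per-frame cross-entropy plus a penalty on changes between frames."""
    ce = F.cross_entropy(logits.flatten(0, 1), labels.flatten(0, 1),
                         ignore_index=255)
    probs = logits.softmax(dim=2)
    consistency = (probs[:, 1:] - probs[:, :-1]).abs().mean()
    return ce + lambda_consistency * consistency
```

In this sketch, any per-frame backbone (for example an ESPNet-style encoder) could be plugged in as `encoder`; the consistency term is a deliberately simplified stand-in for the inconsistency penalty described in the abstract.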