Hidden Two-Stream Convolutional Networks for Action Recognition

04/02/2017
by Yi Zhu, et al.

Analyzing videos of human actions requires understanding the temporal relationships among video frames. CNNs are the current state of the art for action recognition in videos, but standard CNN architectures struggle to capture these relationships. State-of-the-art approaches therefore rely on traditional local optical flow estimation to pre-compute motion information for the CNNs. Such a two-stage pipeline is computationally expensive, storage-demanding, and not end-to-end trainable. In this paper, we present a novel CNN architecture that implicitly captures motion information between adjacent frames. We plug it into a state-of-the-art action recognition framework, two-stream CNNs. We call our approach hidden two-stream CNNs because it takes only raw video frames as input and directly predicts action classes without explicitly computing optical flow. Our end-to-end approach is 10x faster than the two-stage alternative while maintaining similar accuracy. Experiments on the UCF101 and HMDB51 datasets show that our approach significantly outperforms the previous best real-time approaches.
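To illustrate the final prediction step of a two-stream network, here is a minimal NumPy sketch of late fusion, where the spatial (appearance) stream and the temporal (motion) stream each produce class scores that are combined into a single prediction. The function name `fuse_two_stream` and the stream weights are illustrative assumptions, not values from the paper:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_two_stream(spatial_logits, temporal_logits,
                    w_spatial=1.0, w_temporal=1.5):
    """Late fusion: weighted average of per-stream class probabilities.

    The weights here are hypothetical; in practice the temporal stream
    is often weighted more heavily than the spatial stream.
    """
    p_spatial = softmax(spatial_logits)
    p_temporal = softmax(temporal_logits)
    return (w_spatial * p_spatial + w_temporal * p_temporal) / (w_spatial + w_temporal)

# Toy example with 3 action classes: the temporal stream's evidence
# for class 1 dominates the fused prediction.
spatial = np.array([2.0, 0.5, 0.1])
temporal = np.array([0.2, 2.5, 0.3])
probs = fuse_two_stream(spatial, temporal)
pred = int(np.argmax(probs))
```

The "hidden" variant described in the abstract keeps this fusion structure but replaces the pre-computed optical flow input to the temporal stream with a learned subnetwork, so the whole pipeline trains end to end on raw frames.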
