Relaxed Transformer Decoders for Direct Action Proposal Generation

02/03/2021
by Jiaqi Tang, et al.

Temporal action proposal generation is an important and challenging task in video understanding, which aims at detecting all temporal segments containing action instances of interest. Existing proposal generation approaches are generally based on pre-defined anchor windows or heuristic bottom-up boundary matching strategies. This paper presents a simple and end-to-end learnable framework (RTD-Net) for direct action proposal generation, by re-purposing a Transformer-like architecture. To tackle the essential difference between time and space in videos, we make three important improvements over the original Transformer detection framework (DETR). First, to deal with the slowness prior in videos, we replace the original Transformer encoder with a boundary attentive module to better capture temporal information. Second, due to ambiguous temporal boundaries and relatively sparse annotations, we present a relaxed matching loss to relieve the strict criterion of a single assignment to each ground-truth instance. Finally, we devise a three-branch head to further improve proposal confidence estimation by explicitly predicting completeness. Extensive experiments on the THUMOS14 and ActivityNet-1.3 benchmarks demonstrate the effectiveness of RTD-Net on both temporal action proposal generation and temporal action detection. Moreover, because it requires no non-maximum suppression post-processing, RTD-Net is more efficient than previous proposal generation methods. The code will be available at <https://github.com/MCG-NJU/RTD-Action>.
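To make the relaxed matching idea concrete, here is a minimal PyTorch sketch, not the authors' implementation: the tIoU threshold, tensor shapes, and helper names (`temporal_iou`, `relaxed_positive_mask`) are illustrative assumptions. Standard DETR matching assigns exactly one query per ground truth; the relaxation below additionally treats any prediction whose temporal IoU with some ground-truth segment exceeds a threshold as a positive.

```python
# Sketch of a relaxed assignment for temporal segments, assuming normalized
# (start, end) coordinates. The 0.5 threshold is an illustrative choice and
# may differ from the criteria actually used in RTD-Net.
import torch

def temporal_iou(pred, gt):
    """tIoU between pred segments [N, 2] and gt segments [M, 2]."""
    inter_start = torch.max(pred[:, None, 0], gt[None, :, 0])
    inter_end = torch.min(pred[:, None, 1], gt[None, :, 1])
    inter = (inter_end - inter_start).clamp(min=0)
    union = ((pred[:, None, 1] - pred[:, None, 0])
             + (gt[None, :, 1] - gt[None, :, 0]) - inter)
    return inter / union.clamp(min=1e-6)

def relaxed_positive_mask(pred, gt, tiou_thresh=0.5):
    """Mark every prediction overlapping some ground truth above the
    threshold as positive, relaxing DETR's one-to-one assignment."""
    tiou = temporal_iou(pred, gt)          # [N, M]
    best_tiou, best_gt = tiou.max(dim=1)   # best-matching gt per prediction
    return best_tiou > tiou_thresh, best_gt

# Usage: two queries overlap the same ground truth; both count as positives.
pred = torch.tensor([[0.10, 0.30], [0.12, 0.28], [0.60, 0.90]])
gt = torch.tensor([[0.10, 0.30]])
pos, matched = relaxed_positive_mask(pred, gt)
print(pos)  # tensor([ True,  True, False])
```

Under this relaxation, near-boundary queries are no longer penalized as negatives merely because the Hungarian matcher picked a sibling query, which is the sparse-annotation problem the abstract describes.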
