Temporal Attribute-Appearance Learning Network for Video-based Person Re-Identification

09/09/2020
by Jiawei Liu, et al.

Video-based person re-identification aims to match a specific pedestrian across surveillance videos captured at different times and locations. Human attributes and appearance are complementary cues, and both contribute to pedestrian matching. In this work, we propose a novel Temporal Attribute-Appearance Learning Network (TALNet) for video-based person re-identification. TALNet simultaneously exploits human attributes and appearance to learn comprehensive and effective pedestrian representations from videos. It explores hard visual attention and temporal-semantic context for attributes, and spatial-temporal dependencies among body parts for appearance, to boost the learning of both. Specifically, an attribute branch network with a spatial attention block and a temporal-semantic context block is proposed to learn robust attribute representations. The spatial attention block focuses the network on the regions within each video frame that are related to each attribute, while the temporal-semantic context block learns both the temporal context of each attribute across video frames and the semantic context among attributes within each frame. The appearance branch network is designed to learn effective appearance representations from both the whole body and body parts, together with the spatial-temporal dependencies among them. TALNet leverages the complementarity between the attribute and appearance representations, and jointly optimizes them in a multi-task learning fashion. Moreover, we annotate ID-level attributes for each pedestrian in two commonly used video datasets. Extensive experiments on these datasets verify the superiority of TALNet over state-of-the-art methods.
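To make the two-branch design concrete, below is a minimal PyTorch-style sketch of such an architecture. It is an illustration under stated assumptions, not the authors' implementation: the tiny convolutional backbone, the GRU standing in for temporal context, the linear mixing of attribute logits standing in for semantic context, and all names and layer sizes (`TALNetSketch`, `SpatialAttentionBlock`, `TemporalSemanticContext`, `num_ids`, `num_attrs`) are hypothetical stand-ins for the blocks named in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionBlock(nn.Module):
    """Per-location attention mask so the attribute branch can focus on
    attribute-related frame regions (a soft stand-in for hard attention)."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1),
                                  nn.Sigmoid())

    def forward(self, x):                   # x: (B*T, C, H, W)
        return x * self.mask(x)             # re-weight each spatial location

class TemporalSemanticContext(nn.Module):
    """Temporal context via a GRU over frames; semantic context among
    attributes via linear mixing of attribute logits (an assumption,
    not the paper's exact block)."""
    def __init__(self, dim, num_attrs):
        super().__init__()
        self.temporal = nn.GRU(dim, dim, batch_first=True)
        self.attr_head = nn.Linear(dim, num_attrs)
        self.semantic_mix = nn.Linear(num_attrs, num_attrs)

    def forward(self, seq):                 # seq: (B, T, dim)
        ctx, _ = self.temporal(seq)         # temporal context per frame
        logits = self.attr_head(ctx.mean(dim=1))
        return logits + self.semantic_mix(logits)  # inter-attribute context

class TALNetSketch(nn.Module):
    """Two-branch network: attribute and appearance branches sharing a
    backbone, optimized jointly."""
    def __init__(self, num_ids, num_attrs, channels=256):
        super().__init__()
        # Tiny backbone for illustration; a pretrained CNN would be used in practice.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, stride=2, padding=1), nn.ReLU())
        self.attr_attn = SpatialAttentionBlock(channels)
        self.attr_ctx = TemporalSemanticContext(channels, num_attrs)
        self.id_head = nn.Linear(channels, num_ids)  # appearance branch head

    def forward(self, clips):               # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))        # (B*T, C, H', W')
        # Attribute branch: spatial attention -> temporal-semantic context.
        attr_feats = self.attr_attn(feats).mean(dim=(2, 3)).view(b, t, -1)
        attr_logits = self.attr_ctx(attr_feats)
        # Appearance branch: global pooling over space and time
        # (body-part streams and their dependencies are omitted for brevity).
        app_feats = feats.mean(dim=(2, 3)).view(b, t, -1).mean(dim=1)
        return self.id_head(app_feats), attr_logits
```

The multi-task optimization described in the abstract then amounts to summing an identity loss on the appearance branch with an attribute loss on the attribute branch; the dataset sizes below are placeholders:

```python
model = TALNetSketch(num_ids=100, num_attrs=12)   # sizes are placeholders
clips = torch.randn(2, 4, 3, 128, 64)             # 2 clips of 4 frames each
id_logits, attr_logits = model(clips)
loss = (F.cross_entropy(id_logits, torch.randint(0, 100, (2,)))
        + F.binary_cross_entropy_with_logits(
              attr_logits, torch.randint(0, 2, (2, 12)).float()))
```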
