Explicitly Modeling the Discriminability for Instance-Aware Visual Object Tracking
Visual object tracking performance has improved dramatically in recent years, but several severe challenges remain open, such as distractors and occlusions. We suspect the reason is that the feature representations of tracking targets are learned to be expressive but not fully discriminative. In this paper, we propose a novel Instance-Aware Tracker (IAT) that explicitly exploits the discriminability of feature representations, augmenting the classical visual tracking pipeline with an instance-level classifier. First, we introduce a contrastive learning mechanism to formulate the classification task, ensuring that every training sample can be uniquely modeled and remains highly distinguishable from a large number of other samples. In addition, we design an effective negative-sample selection scheme that covers diverse intra-class and inter-class samples in the instance classification branch. Furthermore, we implement two variants of the proposed IAT, a video-level one and an object-level one, which realize the concept of an instance at different granularities: whole videos and target bounding boxes, respectively. The former enhances the ability to distinguish the target from the background, while the latter boosts the discriminative power needed to mitigate the target-distractor dilemma. Extensive experimental evaluations on 8 benchmark datasets show that both versions of the proposed IAT achieve leading results against state-of-the-art methods while running at 30 FPS. Code will be released upon publication.
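To make the instance-level contrastive classification idea concrete, the sketch below shows one common way such an objective is written in PyTorch (an InfoNCE-style loss over a positive and sampled negatives). It is only an illustration under assumed tensor shapes, function names, and a temperature value; it is not the paper's implementation, and the way positives/negatives are drawn (other videos vs. other boxes) would differ between the video-level and object-level variants.

```python
# Illustrative sketch (not from the paper): an InfoNCE-style instance-level
# contrastive loss of the kind described in the abstract. All names and
# shapes here are assumptions made for the example.
import torch
import torch.nn.functional as F

def instance_contrastive_loss(query, positives, negatives, temperature=0.07):
    """query: (B, D) target features; positives: (B, D) same-instance features;
    negatives: (B, K, D) features from other instances (e.g., other videos
    or other bounding boxes, depending on the variant)."""
    query = F.normalize(query, dim=-1)
    positives = F.normalize(positives, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Similarity of each query to its own instance (the positive logit).
    pos_logits = (query * positives).sum(dim=-1, keepdim=True)          # (B, 1)
    # Similarity of each query to the sampled negative instances.
    neg_logits = torch.einsum('bd,bkd->bk', query, negatives)           # (B, K)

    logits = torch.cat([pos_logits, neg_logits], dim=1) / temperature   # (B, 1+K)
    # The correct "class" for every query is its own instance (index 0).
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

In this formulation each training sample is effectively treated as its own class, which matches the abstract's goal of making every sample uniquely modeled and distinguishable from many others.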