MOPT: Multi-Object Panoptic Tracking
Comprehensive understanding of dynamic scenes is a critical prerequisite for intelligent robots to autonomously operate in our environment. Research in this domain, which encompasses diverse perception problems, has primarily been limited to addressing specific tasks individually, and has thus contributed little towards modeling the ability to understand dynamic scenes holistically. As a step towards encouraging research in this direction, we introduce a new perception task that we name Multi-Object Panoptic Tracking (MOPT). MOPT unifies the conventionally disjoint tasks of semantic segmentation, instance segmentation, and multi-object tracking. MOPT allows for exploiting pixel-level semantic information of 'thing' and 'stuff' classes, temporal coherence, and pixel-level associations over time for the mutual benefit of each of these sub-problems. To facilitate unified quantitative evaluation of MOPT, we propose the soft Panoptic Tracking Quality (sPTQ) metric. As a first step towards addressing this task, we propose the novel PanopticTrackNet architecture that builds upon the state-of-the-art top-down panoptic segmentation network EfficientPS by adding a new tracking head to simultaneously learn all subtasks in an end-to-end manner. Additionally, we present several strong baselines that combine predictions from state-of-the-art panoptic segmentation and multi-object tracking models for comparison. We present extensive quantitative and qualitative evaluations for both vision-based and LiDAR-based MOPT on the challenging Virtual KITTI 2 and SemanticKITTI datasets, which demonstrate encouraging results.
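The abstract names the sPTQ metric but does not reproduce its definition. As a hedged sketch only, a PQ-style soft tracking-quality metric consistent with the description could take the following form, where the symbols $\mathit{TP}_c$, $\mathit{FP}_c$, $\mathit{FN}_c$ (matched true-positive segment pairs, false positives, and false negatives of class $c$) and $\mathit{IDS}_c$ (segments whose track ID switches between frames) are assumptions for illustration, not the paper's verbatim formulation:

$$
\mathrm{sPTQ}_c \;=\; \frac{\displaystyle\sum_{(p,q)\in \mathit{TP}_c} \mathrm{IoU}(p,q) \;-\; \sum_{s\in \mathit{IDS}_c} \mathrm{IoU}_s}{|\mathit{TP}_c| \;+\; \tfrac{1}{2}|\mathit{FP}_c| \;+\; \tfrac{1}{2}|\mathit{FN}_c|}
$$

The intuition is that the Panoptic Quality numerator is extended with a tracking term: instead of penalizing each identity switch by a hard count of 1, the penalty is the matching IoU of the switched segment, which is what would make the metric "soft".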
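For the baselines that combine panoptic segmentation with multi-object tracking, one core operation is associating 'thing' instance masks across consecutive frames so that track IDs persist over time. A minimal sketch of such an association step, assuming greedy IoU matching between boolean masks (all function and variable names here are illustrative, not from the paper):

```python
# Hypothetical sketch: greedy IoU-based association of 'thing' instance masks
# across consecutive frames, the kind of step a MOPT baseline might use to
# fuse panoptic segmentation output with tracking. Names are illustrative.
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two boolean masks of the same shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def associate(prev_masks: dict, curr_masks: list, iou_thresh: float = 0.5):
    """Assign track IDs to current-frame masks by greedy IoU matching.

    prev_masks: {track_id: bool mask} from the previous frame.
    curr_masks: list of bool masks detected in the current frame.
    Returns {track_id: mask}; unmatched masks receive fresh IDs.
    """
    next_id = max(prev_masks, default=-1) + 1
    assigned, used = {}, set()
    # Score every (previous track, current mask) pair, best IoU first.
    pairs = sorted(
        ((mask_iou(pm, cm), tid, j)
         for tid, pm in prev_masks.items()
         for j, cm in enumerate(curr_masks)),
        key=lambda t: t[0], reverse=True)
    for iou, tid, j in pairs:
        if iou < iou_thresh:
            break  # remaining pairs are all below threshold
        if tid in assigned or j in used:
            continue  # track or mask already matched
        assigned[tid] = curr_masks[j]
        used.add(j)
    # Unmatched current masks start new tracks.
    for j, cm in enumerate(curr_masks):
        if j not in used:
            assigned[next_id] = cm
            next_id += 1
    return assigned
```

Note that this per-frame greedy matching is only a stand-in for what a learned tracking head like the one in PanopticTrackNet does end-to-end; it is included to make the structure of the task concrete, not as the paper's method.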