Dynamic Graph Modules for Modeling Higher-Order Interactions in Activity Recognition
Video action recognition, a critical step toward video understanding, has attracted increasing attention in recent years. To identify an action involving higher-order object interactions, we need to consider: 1) spatial relations among objects within a single frame; 2) temporal relations between the same or different objects across multiple frames. However, previous approaches, e.g., 2D ConvNet + LSTM or 3D ConvNet, are either incapable of capturing relations between objects or unable to handle streaming videos. In this paper, we propose a novel dynamic graph module to model object interactions in videos. We also devise two instantiations of the module: (i) a visual graph, which captures changes in visual similarity between objects; and (ii) a location graph, which captures changes in the relative locations of objects. Distinct from previous models, the proposed graph module can process streaming videos in a progressive manner. Combined with existing 3D action recognition ConvNets, the graph module also boosts the ConvNets' performance, demonstrating its flexibility. We evaluate the module on the Something-Something dataset and achieve state-of-the-art performance.
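The abstract does not give the module's formulation, so the following is only a minimal, hypothetical PyTorch sketch of what the two graph instantiations could look like: a visual graph built from softmax-normalized pairwise feature affinities, and a location graph built from relative bounding-box geometry, followed by one round of message passing over the object nodes. All names, dimensions, and the update rule here are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphModule(nn.Module):
    """Hypothetical sketch of a dynamic graph module over N object nodes.

    Assumptions (not from the paper): feature dimension, the affinity
    functions, and the GCN-style residual update.
    """
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.query = nn.Linear(feat_dim, feat_dim)   # projects features for visual affinity
        self.key = nn.Linear(feat_dim, feat_dim)
        self.loc_mlp = nn.Sequential(                # scores relative box geometry
            nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1)
        )
        self.update = nn.Linear(feat_dim, feat_dim)  # node-update transform

    def forward(self, feats: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # feats: (N, D) object appearance features from a detector backbone
        # boxes: (N, 4) box coordinates, e.g., (cx, cy, w, h)

        # Visual graph: softmax-normalized pairwise feature affinity.
        affinity = self.query(feats) @ self.key(feats).t()       # (N, N)
        visual_adj = F.softmax(affinity, dim=-1)

        # Location graph: edge weights from pairwise relative box offsets.
        rel = boxes.unsqueeze(1) - boxes.unsqueeze(0)            # (N, N, 4)
        location_adj = F.softmax(self.loc_mlp(rel).squeeze(-1), dim=-1)

        # One round of message passing over both graphs, then a residual update.
        messages = (visual_adj + location_adj) @ feats           # (N, D)
        return feats + F.relu(self.update(messages))

# Usage: 5 detected objects in one frame; in a streaming setting, node
# features would be carried over and the graphs re-estimated per frame.
module = DynamicGraphModule(feat_dim=256)
feats, boxes = torch.randn(5, 256), torch.rand(5, 4)
out = module(feats, boxes)  # (5, 256) updated object features
```

Because both adjacency matrices are recomputed from the current frame's detections, a per-frame update of this kind is compatible with the progressive, streaming processing the abstract claims; how the paper actually aggregates information across frames is not specified here.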