A Correlation-Based Feature Representation for First-Person Activity Recognition
In this paper, a simple yet efficient feature encoding for first-person video is introduced. The proposed method is well suited to representing high-dimensional features such as those extracted from convolutional neural networks (CNNs). The per-frame features are treated as a set of time series, and inter- and intra-time-series relations are employed to build the video descriptor. To capture the inter-time-series relations, the series are grouped and the linear correlation between each pair of groups is computed; these relations can represent the scene dynamics and local motions. The proposed grouping strategy considerably reduces the computational cost. Furthermore, we split the series along the temporal direction in order to better focus on each local time window. To extract cyclic motion patterns, which can be considered primary components of various activities, intra-time-series correlations are exploited. The representation yields highly discriminative features that can be classified by a simple linear SVM. Experiments show that our method outperforms previous encoding methods, such as bag of visual words (BoVW), improved Fisher vector (IFV), Fourier temporal pyramid (FTP), and the recently proposed pooled time series (PoT), on three public first-person datasets. The experimental results also confirm that the proposed method outperforms the state-of-the-art methods on recognizing first-person activities.
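To make the encoding concrete, the following is a minimal NumPy sketch of the general idea described above. It is not the authors' exact formulation: the contiguous channel grouping, the use of group-mean series, the number of temporal windows, and the lag set are all illustrative assumptions.

```python
# Sketch of a correlation-based video descriptor in the spirit of the
# abstract. All design choices (contiguous grouping, group-mean series,
# window count, lag set) are assumptions for illustration only.
import numpy as np

def encode_video(feats, n_groups=32, n_windows=4, lags=(1, 2, 4, 8)):
    """feats: (T, D) array of per-frame CNN features (T frames, D channels).

    Returns a fixed-length descriptor built from
      * inter-group Pearson correlations (scene dynamics / local motion), and
      * intra-series autocorrelations at several lags (cyclic patterns),
    computed on each temporal window and concatenated.
    """
    T, D = feats.shape
    # Group the D per-channel time series into n_groups contiguous groups,
    # summarizing each group by its mean series: shape (T, n_groups).
    groups = np.stack([g.mean(axis=1)
                       for g in np.array_split(feats, n_groups, axis=1)],
                      axis=1)

    descriptor = []
    for win in np.array_split(groups, n_windows, axis=0):  # temporal split
        # Inter-time-series term: upper triangle of the group-to-group
        # correlation matrix (one value per pair of groups).
        corr = np.corrcoef(win.T)                  # (n_groups, n_groups)
        iu = np.triu_indices(n_groups, k=1)
        descriptor.append(np.nan_to_num(corr[iu]))

        # Intra-time-series term: autocorrelation of each group series at a
        # few lags, capturing repetitive (cyclic) motion within the window.
        for lag in lags:
            if win.shape[0] > lag + 1:
                a, b = win[:-lag], win[lag:]
                ac = [np.corrcoef(a[:, j], b[:, j])[0, 1]
                      for j in range(n_groups)]
                descriptor.append(np.nan_to_num(np.array(ac)))
            else:
                descriptor.append(np.zeros(n_groups))
    return np.concatenate(descriptor)
```

Descriptors computed this way for a set of labeled videos could then be fed to an off-the-shelf linear classifier such as scikit-learn's LinearSVC, matching the abstract's use of a linear SVM. Note that because each pair of groups contributes a single correlation, the inter-series term grows with the number of groups rather than with the full feature dimension D, which is the source of the computational saving the abstract mentions.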