A Fully Bayesian Infinite Generative Model for Dynamic Texture Segmentation
Generative dynamic texture models (GDTMs) are widely used for dynamic texture (DT) segmentation in video sequences. GDTMs represent DTs as a set of linear dynamical systems (LDSs). A major limitation of these models is the automatic selection of a proper number of DTs. Dirichlet process mixture (DPM) models, which have recently emerged as a cornerstone of non-parametric Bayesian statistics, are a promising candidate for resolving this issue. Motivated by this, we propose a novel non-parametric, fully Bayesian approach to DT segmentation, formulated as a joint DPM and GDTM construction. This coupling allows the algorithm to determine the number of segments automatically. We derive a Variational Bayesian Expectation-Maximization (VBEM) inference procedure for the proposed model. Moreover, in the E-step, we apply the Rauch-Tung-Striebel smoother (RTSS) to the variational Bayesian LDSs. Finally, experiments on several video sequences show that the proposed algorithm noticeably outperforms previous methods in both efficiency and accuracy.
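For reference, the LDS representation underlying GDTMs is the standard state-space model for dynamic textures; the notation below is generic and illustrative rather than taken from the paper itself:

\[
\begin{aligned}
x_{t+1} &= A x_t + v_t, \qquad v_t \sim \mathcal{N}(0, Q),\\
y_t &= C x_t + w_t, \qquad w_t \sim \mathcal{N}(0, R),
\end{aligned}
\]

where \(x_t\) is the hidden state, \(y_t\) the observed frame or spatio-temporal patch at time \(t\), \(A\) the state-transition matrix, and \(C\) the observation matrix. In this setting, segmentation amounts to assigning each patch to one of a collection of such LDSs, and the DPM prior lets the number of LDSs be inferred from the data rather than fixed in advance.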