LSTM stack-based Neural Multi-sequence Alignment TeCHnique (NeuMATCH)

02/19/2018
by Pelin Dogan, et al.

The alignment of heterogeneous sequential data (e.g., video to text) is an important and challenging problem. Standard techniques for such alignment, including Dynamic Time Warping (DTW) and Conditional Random Fields (CRFs), suffer from inherent drawbacks. Chief among them, the Markov assumption implies that, given the immediate past, future alignment decisions are independent of further history. The separation between similarity computation and the alignment decision also prevents end-to-end training. In this paper, we propose an end-to-end neural architecture in which alignment actions are implemented as moving data between stacks of Long Short-Term Memory (LSTM) blocks. This flexible architecture supports a large variety of alignment tasks, including one-to-one and one-to-many alignment, skipping of unmatched elements, and (with extensions) non-monotonic alignment. Extensive experiments on synthetic and real datasets show that our algorithm outperforms state-of-the-art baselines.
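
For intuition, the sketch below simulates the alignment-as-stack-actions idea on plain Python lists. The action names (MATCH, POP-CLIP, POP-SENTENCE), the greedy threshold policy, the look-ahead heuristic, and the toy similarity function are illustrative assumptions for this sketch only; in the proposed architecture the actions are predicted by LSTM stacks trained end-to-end rather than by a hand-coded rule.

```python
# Toy simulation of stack-based alignment actions (illustrative only).
# Action names, the threshold policy, and the look-ahead heuristic are
# assumptions for this sketch; the paper's model predicts actions with
# LSTM stacks trained end-to-end.
from typing import Callable, List, Tuple


def align(clips_in: List[str], sents_in: List[str],
          score: Callable[[str, str], float],
          threshold: float = 0.5) -> List[Tuple[str, str]]:
    """Greedily consume two stacks: MATCH pairs the tops when their
    similarity clears the threshold; otherwise a POP action skips the
    less promising top element."""
    clips, sents = list(clips_in), list(sents_in)
    matched: List[Tuple[str, str]] = []
    while clips and sents:
        if score(clips[0], sents[0]) >= threshold:
            # MATCH: pop both stack tops and record the pair.
            matched.append((clips.pop(0), sents.pop(0)))
        else:
            # Toy heuristic: peek one element ahead on each stack and skip
            # the top whose removal exposes the better candidate pair.
            ahead_clip = score(clips[1], sents[0]) if len(clips) > 1 else -1.0
            ahead_sent = score(clips[0], sents[1]) if len(sents) > 1 else -1.0
            if ahead_clip >= ahead_sent:
                clips.pop(0)    # POP-CLIP: discard an unmatched video clip
            else:
                sents.pop(0)    # POP-SENTENCE: discard an unmatched sentence
    return matched


if __name__ == "__main__":
    # Hypothetical similarity: 1.0 when the labels after ":" agree, else 0.0.
    sim = lambda c, s: 1.0 if c.split(":")[1] == s.split(":")[1] else 0.0
    video = ["clip:intro", "clip:chase", "clip:credits"]
    text = ["sent:chase", "sent:credits"]
    print(align(video, text, sim))
    # -> [('clip:chase', 'sent:chase'), ('clip:credits', 'sent:credits')]
```

On the toy example, the unmatched "clip:intro" is skipped by a POP-CLIP action and the remaining clips are matched one-to-one, illustrating how skipping and matching emerge from a small set of stack operations.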
