Deep Episodic Memory: Encoding, Recalling, and Predicting Episodic Experiences for Robot Action Execution

01/12/2018
by   Jonas Rothfuss, et al.

We present a novel deep neural network architecture for representing robot experiences in an episodic-like memory that facilitates encoding, recalling, and predicting action experiences. Our proposed unsupervised deep episodic memory model 1) encodes observed actions in a latent vector space and, based on this latent encoding, 2) infers action categories, 3) reconstructs the original frames, and 4) predicts future frames. We evaluate the proposed model on two different large-scale action datasets. Results show that conceptually similar actions are mapped into the same region of the latent vector space, i.e., the conceptual similarity of videos is reflected by the proximity of their vector representations. Based on this contribution, we introduce an action matching and retrieval mechanism and evaluate its performance and generalization capability on a real humanoid robot in an action execution scenario.
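The retrieval idea described above can be illustrated with a minimal sketch: episodes are stored as latent vectors, and a query encoding is matched to the most similar stored episode. The `EpisodicMemory` class, the toy latent vectors, and the cosine-similarity matching rule here are illustrative assumptions, not the paper's actual architecture (which uses a learned deep encoder over video frames).

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class EpisodicMemory:
    """Toy episodic memory: stores latent vectors with action labels and
    recalls the stored episode most similar to a query encoding."""

    def __init__(self):
        self.latents = []  # stored latent vectors
        self.labels = []   # associated action labels

    def store(self, latent, label):
        self.latents.append(np.asarray(latent, dtype=float))
        self.labels.append(label)

    def recall(self, query):
        """Return the label and similarity score of the closest episode."""
        query = np.asarray(query, dtype=float)
        scores = [cosine_similarity(query, z) for z in self.latents]
        best = int(np.argmax(scores))
        return self.labels[best], scores[best]

# Hand-made latent vectors standing in for the encoder's output;
# conceptually similar actions lie close together in this space.
memory = EpisodicMemory()
memory.store([1.0, 0.1, 0.0], "pouring")
memory.store([0.0, 1.0, 0.2], "cutting")
memory.store([0.9, 0.2, 0.1], "pouring")

label, score = memory.recall([0.95, 0.15, 0.05])
print(label)  # the nearest stored episode is a "pouring" one
```

In the paper's setting, the query vector would come from encoding an observed video, and the recalled episode would drive action execution on the robot.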
