Automated acquisition of structured, semantic models of manipulation activities from human VR demonstration

11/27/2020
by Andrei Haidu, et al.

In this paper we present a system capable of collecting and annotating everyday activities, performed by humans and understandable to robots, in virtual environments. Human movements are mapped into the simulated world using off-the-shelf virtual reality devices with full-body and eye-tracking capabilities. All interactions in the virtual world are physically simulated, so movements and their effects closely relate to the real world. During activity execution, a subsymbolic data logger records the environment and the human gaze on a per-frame basis, enabling offline scene reproduction and replay. Coupled with the physics engine, online monitors (symbolic data loggers) parse the simulation, using various grammars, and record events, actions, and their effects in the simulated world.
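The abstract describes two logging layers: a per-frame subsymbolic recorder and online symbolic monitors that parse raw simulation signals into events. The following is a minimal Python sketch of that split, under simplified assumptions about the scene representation; all names here (`Frame`, `SubsymbolicLogger`, `ContactMonitor`, the `Grasp` event) are hypothetical illustrations, not the authors' actual engine-side API.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

# Simplified pose: position only; a real logger would record full 6-DoF poses.
Pose = Tuple[float, float, float]


@dataclass
class Frame:
    """Subsymbolic snapshot of one simulation tick."""
    timestamp: float
    entity_poses: Dict[str, Pose]   # every tracked object and body part
    gaze_target: Optional[str]      # entity currently fixated by the eyes


@dataclass
class SubsymbolicLogger:
    """Records the full scene state per frame, enabling offline replay."""
    frames: List[Frame] = field(default_factory=list)

    def on_tick(self, timestamp: float, poses: Dict[str, Pose],
                gaze: Optional[str]) -> None:
        self.frames.append(Frame(timestamp, poses, gaze))


@dataclass
class ContactMonitor:
    """Online symbolic monitor: turns raw hand-object contact signals
    into Grasp event intervals (a tiny stand-in for an event grammar)."""
    events: List[dict] = field(default_factory=list)
    _open: Dict[str, float] = field(default_factory=dict)  # key -> grasp start

    def on_contact(self, timestamp: float, hand: str, obj: str,
                   touching: bool) -> None:
        key = f"{hand}:{obj}"
        if touching and key not in self._open:
            self._open[key] = timestamp          # contact begins: open interval
        elif not touching and key in self._open:
            start = self._open.pop(key)          # contact ends: close interval
            self.events.append({"type": "Grasp", "actor": hand,
                                "object": obj, "start": start, "end": timestamp})
```

A short usage example, with invented entity names, shows how the two layers run side by side so that each symbolic event can later be grounded in the per-frame log:

```python
log = SubsymbolicLogger()
mon = ContactMonitor()

mon.on_contact(0.10, "RightHand", "Mug", touching=True)
log.on_tick(0.10, {"RightHand": (0.4, 0.1, 0.9), "Mug": (0.4, 0.1, 0.9)}, gaze="Mug")
mon.on_contact(2.35, "RightHand", "Mug", touching=False)

print(mon.events)  # one symbolic Grasp interval over the logged frames
```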
