Action Anticipation: Reading the Intentions of Humans and Robots
Humans have the fascinating capacity to understand and anticipate the actions of other humans sharing the same space, without any verbal communication. This "intention reading" capacity is underpinned by a common motor repertoire shared by all humans, and afforded by a subtle coordination of eye, head, and arm movements that encodes the cues and signals another human will ultimately decipher. In this paper we study the action anticipation capacity of humans and robots in three steps: (i) conducting human interaction studies to record a set of relevant motion signals and cues, (ii) using these data to build computational motor control models, and (iii) incorporating these models into a robot controller. In the human studies, we ask participants to guess which action an actor is performing: giving an object to a person or placing it on a table. Our results reveal that giving actions elicit more complex gaze behavior than placing actions. These findings are integrated into our motor controller together with the arm movement modeled from human behavior: a Gaussian Mixture Model captures the human arm movement, and Gaussian Mixture Regression then generates the controller. The legibility of the controller is tested in a human-robot scenario, validating the results acquired from the human experiment. Our work is a step toward building robotic systems that are not only capable of reading and anticipating the actions of human collaborators, but also of acting in a way that is legible to their human counterparts.
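To make the modeling pipeline concrete, here is a minimal sketch of the GMM-plus-GMR idea the abstract describes: fit a Gaussian Mixture Model over joint (time, position) samples of demonstrated arm movements, then use Gaussian Mixture Regression to recover a smooth reference trajectory for a controller. The synthetic data, component count, and one-dimensional setup are illustrative assumptions, not the paper's actual data or implementation.

```python
# Hedged sketch: GMM over (time, position) + GMR for a reference trajectory.
# All data and parameters are illustrative, not from the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic demonstrations: five noisy minimum-jerk-like reaches from 0 to 1.
t = np.tile(np.linspace(0.0, 1.0, 100), 5)
x = (10 * t**3 - 15 * t**4 + 6 * t**5) + 0.01 * rng.standard_normal(t.size)
data = np.column_stack([t, x])          # joint (input, output) samples

gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(data)

def gmr(gmm, t_query):
    """Conditional mean E[x | t] under the fitted joint GMM."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    out = np.zeros_like(t_query)
    for j, tq in enumerate(t_query):
        # Responsibility of each component for this time step.
        h = np.array([wk * np.exp(-0.5 * (tq - mk[0])**2 / ck[0, 0])
                      / np.sqrt(ck[0, 0])
                      for wk, mk, ck in zip(w, means, covs)])
        h /= h.sum()
        # Component-wise conditional means, blended by responsibility.
        cond = [mk[1] + ck[1, 0] / ck[0, 0] * (tq - mk[0])
                for mk, ck in zip(means, covs)]
        out[j] = np.dot(h, cond)
    return out

traj = gmr(gmm, np.linspace(0.0, 1.0, 50))  # reference for a robot controller
```

In a legibility-aware controller, the same regression could condition on additional variables (e.g. gaze direction) so that the generated motion carries the anticipatory cues observed in the human studies.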