Blending Primitive Policies in Shared Control for Assisted Teleoperation
Movement primitives can accommodate changes in the robot state while maintaining attraction to the original policy. As such, we investigate the use of primitives as a blending mechanism by treating state deviations from the original policy as the result of user inputs. As the primitive recovers from the user input, it implicitly blends the human and robot policies without requiring an explicit weighting between them (referred to as arbitration). In this paper, we adopt Dynamical Movement Primitives (DMPs), which allow us to avoid the need for multiple demonstrations and are fast enough to enable numerous instantiations, one for each hypothesis of the human intent. User studies are presented on assisted teleoperation tasks of reaching multiple goals and avoiding dynamic obstacles. Comparable performance to conventional teleoperation was achieved while significantly decreasing human intervention, often by more than 60%.
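The blending behaviour described above rests on the attractor dynamics of a DMP: a user input displaces the state, and the spring-damper term pulls it back toward the demonstrated policy. The sketch below illustrates this in a minimal 1-D transformation system; it is not the authors' implementation, and the parameter names and values (`alpha`, `beta`, `tau`, the perturbation size) are assumptions for illustration only.

```python
import numpy as np

def dmp_step(x, v, goal, f, dt, tau=1.0, alpha=25.0, beta=25.0 / 4.0):
    """One Euler step of a standard DMP transformation system.

    tau * dv/dt = alpha * (beta * (goal - x) - v) + f
    tau * dx/dt = v
    Parameter values here are illustrative assumptions.
    """
    v_dot = (alpha * (beta * (goal - x) - v) + f) / tau
    x_dot = v / tau
    return x + x_dot * dt, v + v_dot * dt

# Roll out toward goal = 1.0; halfway through, a hypothetical user input
# nudges the state away from the policy, and the attractor recovers,
# which is the implicit blending behaviour the abstract describes.
x, v = 0.0, 0.0
goal, dt = 1.0, 0.01
for step in range(400):
    if step == 200:
        x += 0.3          # user-induced deviation from the original policy
    x, v = dmp_step(x, v, goal, f=0.0, dt=dt)
print(round(x, 3))        # near the goal despite the perturbation
```

In an assisted-teleoperation setting, one such system could be instantiated per candidate goal (one per hypothesis of the human intent), which the abstract notes is feasible because DMP rollouts are cheap to compute.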