Human Apprenticeship Learning via Kernel-based Inverse Reinforcement Learning

02/25/2020
by Mark A. Rucker, et al.

This paper considers whether a reward function learned via inverse reinforcement learning from a human expert can be used as a feedback intervention to alter future human performance as desired (i.e., human-to-human apprenticeship learning). To learn reward functions, two new algorithms are developed: a kernel-based inverse reinforcement learning algorithm and a Monte Carlo reinforcement learning algorithm. The algorithms are benchmarked against well-known alternatives from their respective literatures and are shown to outperform them in terms of efficiency and optimality. To test the feedback intervention, two randomized experiments are performed with 3,256 human participants. The experimental results demonstrate with statistical significance that the rewards learned from "expert" individuals are effective as feedback interventions. In addition to the algorithmic contributions and successful experiments, the paper also describes three modifications to reward functions that improve their effectiveness as feedback interventions for humans.
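For readers unfamiliar with the apprenticeship-learning setup, the sketch below illustrates one of its standard building blocks: estimating discounted feature expectations from expert trajectories, which a linear reward hypothesis is then fit against (in the style of Abbeel and Ng's apprenticeship learning via IRL). This is a generic, minimal illustration only; it is not the paper's kernel-based IRL or Monte Carlo RL algorithm, and the toy environment, feature map `phi`, and discount factor are assumptions for demonstration.

```python
import numpy as np

def feature_expectations(trajectories, phi, gamma=0.95):
    """Empirical discounted feature expectations over a set of trajectories.

    trajectories: list of state sequences (each a list of states)
    phi: function mapping a state to a feature vector (np.ndarray)
    gamma: discount factor in [0, 1)
    """
    mu = None
    for traj in trajectories:
        # Discounted sum of features along one trajectory
        discounted = sum(gamma ** t * phi(s) for t, s in enumerate(traj))
        mu = discounted if mu is None else mu + discounted
    return mu / len(trajectories)

if __name__ == "__main__":
    # Toy usage (hypothetical): states are positions 0..4 on a line,
    # features are one-hot indicators of position.
    n_states = 5
    phi = lambda s: np.eye(n_states)[s]
    expert_trajs = [[0, 1, 2, 3, 4], [0, 1, 2, 2, 3]]

    mu_expert = feature_expectations(expert_trajs, phi)

    # Under a linear reward hypothesis R(s) = w . phi(s), apprenticeship
    # learning chooses w so that a learner's feature expectations approach
    # those of the expert; here we only show a normalized placeholder weight.
    w = mu_expert / np.linalg.norm(mu_expert)
    print("expert feature expectations:", mu_expert)
    print("illustrative reward weights:", w)
```

The paper's contribution differs from this classical recipe by working in a kernelized (nonparametric) reward space and by the Monte Carlo RL solver used in the inner loop, but the feature-expectation viewpoint above is the common starting point.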
