Modeling Human Driving Behavior in Highway Scenario using Inverse Reinforcement Learning

10/07/2020
by Zhiyu Huang, et al.

Human driving behavior modeling is of great importance for designing safe, smart, smooth, and personalized autonomous driving systems. In this paper, an internal reward function-based driving model that emulates the human's internal decision-making mechanism is proposed. In addition, a sampling-based inverse reinforcement learning (IRL) algorithm that learns the reward function from human naturalistic driving data is developed. A polynomial trajectory sampler is adopted to generate feasible trajectories and approximate the partition function in the maximum entropy IRL framework, and a dynamic and interactive environment is built upon the static driving dataset to evaluate the generated trajectories while accounting for the mutual dependency of agents' actions. The proposed method is applied to learn personalized reward functions for individual human drivers from the NGSIM dataset. The qualitative results demonstrate that the learned reward function is able to interpret individual drivers' decisions. The quantitative results also reveal that the personalized modeling method significantly outperforms the general modeling approach, reducing the errors in human likeness by 24%, and delivers better results compared to other baseline methods. Moreover, it is found that estimating the response actions of surrounding vehicles plays an integral role in predicting the trajectory accurately and achieving better generalization ability.
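As a rough illustration of the sampling-based maximum entropy IRL idea described above, the following Python sketch shows a single gradient step in which sampled candidate trajectories approximate the partition function. The function name, feature choices, and random stand-in features are hypothetical; the paper's actual polynomial trajectory sampler and interactive environment are not reproduced here.

```python
import numpy as np

def maxent_irl_update(theta, demo_features, sampled_features, lr=0.01):
    """One gradient step of sampling-based maximum entropy IRL (illustrative sketch).

    theta:            (d,) linear reward weights
    demo_features:    (d,) feature vector of the human-demonstrated trajectory
    sampled_features: (n, d) feature vectors of n sampled candidate trajectories
                      (standing in for a polynomial trajectory sampler)
    """
    # Reward of each sampled trajectory under the current weights
    rewards = sampled_features @ theta                      # (n,)
    # Softmax over sampled rewards approximates the partition function,
    # assigning each candidate trajectory a probability weight
    weights = np.exp(rewards - rewards.max())
    weights /= weights.sum()                                # (n,)
    # Expected feature counts under the current reward model
    expected_features = weights @ sampled_features          # (d,)
    # MaxEnt IRL gradient: demonstrated features minus expected features
    grad = demo_features - expected_features
    return theta + lr * grad

# Toy usage with 3 hypothetical features (e.g. speed, acceleration, headway)
rng = np.random.default_rng(0)
theta = np.zeros(3)
demo = rng.normal(size=3)
samples = rng.normal(size=(50, 3))
for _ in range(200):
    theta = maxent_irl_update(theta, demo, samples)
print("learned reward weights:", theta)
```

In this sketch the expected feature counts are computed over the sampled trajectory set rather than the full trajectory space, which is the role the polynomial trajectory sampler plays in the paper's framework.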
