A Hysteretic Q-learning Coordination Framework for Emerging Mobility Systems in Smart Cities

11/05/2020
by Behdad Chalaki, et al.

Connected and automated vehicles (CAVs) can alleviate traffic congestion, reduce air pollution, and improve safety. In this paper, we provide a decentralized coordination framework for CAVs at a signal-free intersection to minimize travel time and improve fuel efficiency. We employ a simple yet powerful reinforcement learning approach, an off-policy temporal-difference learning method called Q-learning, enhanced with a coordination mechanism to address this problem. Then, we integrate a first-in-first-out queuing policy to improve the performance of our system. We demonstrate the efficacy of our proposed approach through simulation and comparison with the classical optimal control method based on Pontryagin's minimum principle.
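For readers unfamiliar with the hysteretic variant named in the title, the sketch below illustrates its core update rule: two learning rates are used, with the smaller one applied to negative temporal-difference errors so that an agent stays optimistic while other agents are still adapting. The state/action sizes, rate values, and function names here are illustrative assumptions; the sketch does not reproduce the paper's vehicle-level state encoding, reward design, FIFO queuing policy, or coordination mechanism.

import numpy as np

# Hypothetical discretization of the state and action spaces.
N_STATES, N_ACTIONS = 100, 5
ALPHA, BETA = 0.1, 0.01   # hysteretic rates: BETA < ALPHA damps negative updates
GAMMA = 0.95              # discount factor

Q = np.zeros((N_STATES, N_ACTIONS))

def hysteretic_update(s, a, r, s_next):
    """Apply one hysteretic Q-learning step for the transition (s, a, r, s_next)."""
    td_error = r + GAMMA * np.max(Q[s_next]) - Q[s, a]
    # Asymmetric learning: update quickly on positive TD errors, slowly on
    # negative ones, which reduces oscillation in multi-agent settings.
    rate = ALPHA if td_error >= 0 else BETA
    Q[s, a] += rate * td_error

def epsilon_greedy(s, epsilon=0.1):
    """Choose a random action with probability epsilon, otherwise the greedy one."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[s]))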
