LCS-TF: Multi-Agent Deep Reinforcement Learning-Based Intelligent Lane-Change System for Improving Traffic Flow
Discretionary lane change is one of the critical challenges in autonomous vehicle (AV) design due to its significant impact on traffic efficiency. Existing intelligent lane-change solutions have primarily focused on optimizing the performance of the ego vehicle and therefore suffer from limited generalization. Recent research has shown increased interest in multi-agent reinforcement learning (MARL)-based approaches, which address this limitation of ego vehicle-based solutions through close coordination of multiple agents. Although MARL-based approaches have shown promising results, the potential impact of lane-change decisions on the overall traffic flow of a road segment has not been fully considered. In this paper, we present LCS-TF, a novel hybrid MARL-based intelligent lane-change system for AVs designed to jointly optimize the local performance of the ego vehicle and the global performance of the overall traffic flow of a given road segment. Informed by a careful review of the relevant transportation literature, we design a novel state space that integrates both critical local traffic information pertaining to the vehicles surrounding the ego vehicle and global traffic information obtained from a road-side unit (RSU) responsible for managing the road segment. We define a reward function that leads the agents to make effective lane-change decisions by accounting for both the performance of the ego vehicle and the overall improvement of traffic flow. A multi-agent deep Q-network (DQN) algorithm is designed to determine the optimal policy for each agent so that the agents cooperate effectively in performing lane-change maneuvers. We evaluate the performance of LCS-TF through extensive simulations in comparison with state-of-the-art MARL models. The results indicate that LCS-TF exhibits superior performance in all aspects of traffic efficiency, driving safety, and driver comfort.
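The abstract describes a hybrid state space (local observations plus RSU-provided global traffic information) and a reward combining ego-vehicle performance with segment-level flow, but does not give concrete definitions. The following is a minimal sketch of how such a state and reward might be composed; all names (`local_observation`, `rsu_traffic_summary`, the weights `w_local`/`w_global`, the flow terms) are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def build_state(local_observation: np.ndarray,
                rsu_traffic_summary: np.ndarray) -> np.ndarray:
    """Hypothetical hybrid state: concatenate local features
    (e.g., gaps and speeds of surrounding vehicles) with global
    features broadcast by the RSU (e.g., per-lane density/flow)."""
    return np.concatenate([local_observation, rsu_traffic_summary])

def reward(ego_speed: float, target_speed: float,
           segment_flow: float, baseline_flow: float,
           w_local: float = 0.5, w_global: float = 0.5) -> float:
    """Hypothetical hybrid reward: the local term rewards the ego
    vehicle for tracking its desired speed; the global term rewards
    improvement of segment-level traffic flow over a baseline."""
    local_term = -abs(ego_speed - target_speed) / target_speed
    global_term = (segment_flow - baseline_flow) / max(baseline_flow, 1e-6)
    return w_local * local_term + w_global * global_term
```

One plausible design consequence of such a weighting is that an agent may forgo a locally advantageous lane change when the RSU's flow signal indicates it would worsen downstream congestion.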
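The abstract also does not specify the multi-agent DQN architecture. A common realization in this setting is parameter-shared independent Q-learners over a small discrete action set (keep lane, change left, change right); the sketch below assumes that design and uses PyTorch. It is a hedged illustration of the general technique, not the paper's implementation.

```python
import random
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small MLP mapping the hybrid state to Q-values over the
    assumed lane-change actions: keep lane, change left, change right."""
    def __init__(self, state_dim: int, n_actions: int = 3):
        super().__init__()
        self.n_actions = n_actions
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def select_action(qnet: QNet, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy action selection, run independently by each agent
    on its own hybrid state vector."""
    if random.random() < epsilon:
        return random.randrange(qnet.n_actions)
    with torch.no_grad():
        return int(qnet(state).argmax().item())

def td_loss(qnet: QNet, target_net: QNet, batch, gamma: float = 0.99):
    """Standard one-step DQN temporal-difference loss on a replay batch
    (states, actions, rewards, next states, done flags)."""
    s, a, r, s_next, done = batch
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * q_next
    return nn.functional.mse_loss(q, target)
```

Under parameter sharing, every agent queries the same `QNet` with its own state, so coordination emerges through the shared policy and the global RSU features in the state rather than through explicit inter-agent communication.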