Vehicular Cooperative Perception Through Action Branching and Federated Reinforcement Learning
Cooperative perception plays a vital role in extending a vehicle's sensing range beyond its line of sight. However, exchanging raw sensory data under limited communication resources is infeasible. To enable efficient cooperative perception, vehicles must answer a fundamental question: what sensory data should be shared, at which resolution, and with which vehicles? To answer this question, this paper proposes a novel framework for reinforcement learning (RL)-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs), built on a quadtree-based point cloud compression mechanism. Furthermore, a federated RL approach is introduced to speed up the training process across vehicles. Simulation results show that the RL agents efficiently learn vehicular association, RB allocation, and message content selection while maximizing vehicles' satisfaction in terms of the received sensory information. The results also show that federated RL improves the training process: better policies are reached within the same amount of training time compared to the non-federated approach.
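As a minimal sketch of the quadtree-based compression idea, assuming the point cloud is first rasterized into a bird's-eye occupancy grid and that the tree depth serves as the resolution knob the agent selects per CPM (the paper's exact encoding may differ; all names here are illustrative):

```python
import numpy as np

def quadtree_encode(grid, x, y, size, max_depth, depth=0):
    """Recursively encode a square occupancy patch.

    A leaf is the patch's occupancy ratio; an internal node is a list of
    four child encodings (NW, NE, SW, SE). `max_depth` acts as the
    resolution knob: deeper trees preserve more spatial detail.
    """
    patch = grid[y:y + size, x:x + size]
    occupancy = patch.mean()
    # Stop splitting when the patch is uniform or the depth budget is spent.
    if depth == max_depth or occupancy in (0.0, 1.0) or size == 1:
        return occupancy
    half = size // 2
    return [
        quadtree_encode(grid, x, y, half, max_depth, depth + 1),                # NW
        quadtree_encode(grid, x + half, y, half, max_depth, depth + 1),         # NE
        quadtree_encode(grid, x, y + half, half, max_depth, depth + 1),         # SW
        quadtree_encode(grid, x + half, y + half, half, max_depth, depth + 1),  # SE
    ]

# Example: a 64x64 bird's-eye occupancy grid derived from a point cloud.
rng = np.random.default_rng(0)
grid = (rng.random((64, 64)) > 0.9).astype(float)
coarse = quadtree_encode(grid, 0, 0, 64, max_depth=3)  # low-resolution CPM
fine = quadtree_encode(grid, 0, 0, 64, max_depth=6)    # high-resolution CPM
```

A coarser tree costs fewer RBs to transmit, so the content-selection action trades perception detail against the available radio resources.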
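The title's action branching and the federated training step can likewise be sketched. Assuming a shared trunk with one output head per action dimension (association, RB allocation, CPM content) and a FedAvg-style uniform parameter average, neither of which is specified in detail by the abstract:

```python
import torch
import torch.nn as nn

class BranchingPolicy(nn.Module):
    """Action-branching Q-network: a shared trunk with one head per action
    dimension, so the joint action space factorizes into (association,
    RB allocation, CPM resolution) instead of growing combinatorially."""
    def __init__(self, obs_dim, n_vehicles, n_rbs, n_resolutions):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                   nn.Linear(128, 128), nn.ReLU())
        self.assoc_head = nn.Linear(128, n_vehicles)       # which vehicle to serve
        self.rb_head = nn.Linear(128, n_rbs)               # which RB to transmit on
        self.content_head = nn.Linear(128, n_resolutions)  # quadtree depth to send

    def forward(self, obs):
        h = self.trunk(obs)
        return self.assoc_head(h), self.rb_head(h), self.content_head(h)

def federated_average(models):
    """FedAvg-style aggregation: uniform average of per-vehicle policy
    weights, broadcast back so every vehicle resumes from the same model."""
    global_state = {k: torch.stack([m.state_dict()[k] for m in models]).mean(0)
                    for k in models[0].state_dict()}
    for m in models:
        m.load_state_dict(global_state)

# Each vehicle trains its own branching policy on local experience, then the
# weights are averaged periodically to speed up training across the fleet.
fleet = [BranchingPolicy(obs_dim=32, n_vehicles=8, n_rbs=10, n_resolutions=4)
         for _ in range(5)]
federated_average(fleet)
```

Averaging model weights rather than exchanging raw experience keeps the training overhead compatible with the same limited communication budget that motivates compressing the CPMs in the first place.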