Multi-Agent Reinforcement Learning for Distributed Joint Communication and Computing Resource Allocation over Cell-Free Massive MIMO-enabled Mobile Edge Computing Network

12/04/2021
by   Fitsum Debebe Tilahun, et al.

To support newly introduced multimedia services with ultra-low latency and extensive computation requirements, resource-constrained end-user devices should exploit the ubiquitous computing resources available at the network edge, augmenting on-board (local) processing with edge computing. In this regard, the capability of cell-free massive MIMO to provide reliable access links with uniform quality of service, free of cell-edge effects, can be exploited for seamless parallel processing. Taking this into account, we consider a cell-free massive MIMO-enabled mobile edge network to meet the stringent requirements of these advanced services. For the considered network, we formulate a joint communication and computing resource allocation (JCCRA) problem with the objective of minimizing the energy consumption of the users while meeting tight delay constraints. We then propose a fully distributed cooperative solution approach based on the multi-agent deep deterministic policy gradient (MADDPG) algorithm. Simulation results demonstrate that the proposed distributed approach converges to the performance of a centralized deep deterministic policy gradient (DDPG)-based benchmark, while alleviating the large overhead associated with the latter. Furthermore, the proposed approach significantly outperforms heuristic baselines in terms of energy efficiency, achieving roughly up to five times lower total energy consumption.
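To make the centralized-training, decentralized-execution structure of MADDPG concrete, the sketch below shows per-user actor networks that act only on local observations (e.g., channel quality, task size) alongside centralized critics that see the joint observations and actions during training. This is a minimal illustrative sketch in PyTorch; the observation/action dimensions, network sizes, and variable names are assumptions for illustration, not the authors' implementation.

```python
# Minimal MADDPG structure sketch (PyTorch). All dimensions and network
# sizes are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps a user's local observation to its resource-allocation action
    (e.g., offloading ratio and transmit power), each squashed to [0, 1]."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Sigmoid(),
        )

    def forward(self, obs):
        return self.net(obs)

class CentralizedCritic(nn.Module):
    """Q-network conditioned on the joint observations and actions of all
    agents; used only during centralized training."""
    def __init__(self, joint_obs_dim, joint_act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_obs_dim + joint_act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

# Example setup: 4 users, each with a 6-dimensional local observation
# (hypothetical) and a 2-dimensional continuous action.
n_agents, obs_dim, act_dim = 4, 6, 2
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critics = [CentralizedCritic(n_agents * obs_dim, n_agents * act_dim)
           for _ in range(n_agents)]

# Decentralized execution: each user acts on its own observation only.
local_obs = torch.rand(n_agents, obs_dim)
actions = torch.stack([actor(o) for actor, o in zip(actors, local_obs)])

# Centralized training: each agent's critic evaluates the joint state-action.
joint_obs = local_obs.reshape(1, -1)
joint_act = actions.reshape(1, -1)
q_values = [critic(joint_obs, joint_act) for critic in critics]
```

After training, only the per-user actors are needed at execution time, which is what removes the signaling overhead of a fully centralized DDPG controller.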
