Non-Uniform Time-Step Deep Q-Network for Carrier-Sense Multiple Access in Heterogeneous Wireless Networks
This paper investigates a new class of carrier-sense multiple access (CSMA) protocols based on deep reinforcement learning (DRL), referred to as carrier-sense deep-reinforcement-learning multiple access (CS-DLMA). The goal of CS-DLMA is to enable efficient and equitable spectrum sharing among a group of co-located heterogeneous wireless networks. Existing CSMA protocols, such as the medium access control (MAC) protocol of WiFi, are designed for homogeneous networks in which all nodes adopt the same protocol; they suffer severe performance degradation in heterogeneous environments where nodes adopt different MAC protocols. CS-DLMA circumvents this problem by means of DRL. In particular, this paper adopts alpha-fairness as the general objective of CS-DLMA. With alpha-fairness, CS-DLMA can achieve a range of different objectives when coexisting with other MACs simply by changing the value of alpha. A salient feature of CS-DLMA is that it can achieve these objectives without knowing the coexisting MACs, through a DRL-based learning process. The underpinning DRL technique in CS-DLMA is the deep Q-network (DQN). Conventional DQN algorithms, however, are not suitable for CS-DLMA because they assume uniform time steps, whereas time steps in CSMA protocols are non-uniform: the duration required for carrier sensing is shorter than the duration of a data transmission. This paper introduces a non-uniform time-step formulation of DQN to address this issue. Our simulation results show that CS-DLMA can achieve the general alpha-fairness objective when coexisting with TDMA, ALOHA, and WiFi protocols by adjusting its own transmission strategy. Interestingly, we also find that CS-DLMA is more Pareto-efficient than other CSMA protocols when coexisting with WiFi.
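The abstract does not spell out the utility function; the standard alpha-fair utility from the fairness literature, which we assume is the form adopted here, is

$$
f_\alpha(x) = \begin{cases} \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \ne 1,\\[4pt] \log x, & \alpha = 1, \end{cases}
$$

with the objective of maximizing $\sum_i f_\alpha(x_i)$ over the node throughputs $x_i$. Under this form, $\alpha = 0$ recovers sum throughput, $\alpha = 1$ yields proportional fairness, and $\alpha \to \infty$ approaches max-min fairness, which is consistent with the claim that varying alpha spans a range of objectives.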
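To illustrate the non-uniform time-step issue, below is a minimal sketch of a semi-Markov-style TD target in which the bootstrap term is discounted by the actual step duration rather than by a fixed single slot; the function name, the per-slot reward convention, and the discount value are our own illustrative assumptions, not the paper's stated formulation.

```python
import numpy as np

GAMMA = 0.99  # per-slot discount factor (value assumed for illustration)

def non_uniform_td_target(rewards, q_next, gamma=GAMMA):
    """TD target for a transition whose time step spans len(rewards) slots.

    rewards : per-slot rewards collected during the step (length tau);
              a carrier-sensing step might span tau = 1 slot, while a
              data-transmission step spans tau > 1 slots.
    q_next  : Q-value estimates for all actions in the next state.
    """
    tau = len(rewards)
    # Discount each in-step reward by how many slots into the step it arrives.
    discounted_return = sum(gamma**t * r for t, r in enumerate(rewards))
    # Bootstrap from the next state, discounted by the full step length tau
    # instead of the fixed exponent of 1 used by uniform time-step DQN.
    return discounted_return + gamma**tau * np.max(q_next)

# Example: a 3-slot transmission step versus a 1-slot sensing step.
q_next = np.array([0.4, 0.7])
y_transmit = non_uniform_td_target([1.0, 1.0, 1.0], q_next)  # tau = 3
y_sense = non_uniform_td_target([0.0], q_next)  # tau = 1, standard DQN target
```

With uniform steps both transitions would be discounted identically; weighting the bootstrap by `gamma**tau` keeps long transmission steps and short sensing steps on a common time scale.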