Data Representation for Deep Learning with Prior Knowledge of Symmetric Wireless Tasks
Deep neural networks (DNNs) have been applied to a variety of wireless tasks, which usually require a large number of samples for training. Because wireless networks are highly dynamic and gathering data is expensive, it is paramount to reduce the sample complexity. Incorporating domain knowledge into learning is a promising way of decreasing the number of training samples. Yet how to invoke prior knowledge of wireless tasks for efficient data representation remains largely unexplored. In this article, we first briefly summarize several approaches to reducing training complexity. Then, we show that a symmetric property, permutation equivariance, widely exists in wireless tasks. We introduce a simple method to compress the training set by exploiting this generic prior: jointly sorting the input and output of the DNNs. We use interference coordination and caching policy optimization to illustrate how to apply this method of data representation, called ranking, and how much it can reduce the sample complexity. Simulation results demonstrate that ranking reduces the number of training samples required to achieve the same performance as the traditional data representation by 10 to 200 fold.
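The joint-sorting idea can be sketched as follows. This is a minimal illustration under assumed conventions (per-user feature matrix `x`, per-user label vector `y`, sorting by the first feature), not the paper's actual code: for a permutation-equivariant task, permuting the users in the input permutes the output the same way, so sorting inputs into a canonical order and applying the same permutation to the outputs maps all permuted variants of a sample onto one representative.

```python
import numpy as np

def rank_sample(x, y):
    """Sort users into a canonical order and permute labels identically.

    x: (K, F) per-user input features; y: (K,) per-user outputs.
    The sorting key (first feature column) is an illustrative assumption.
    """
    order = np.argsort(x[:, 0])
    return x[order], y[order]

# Two permuted versions of the same underlying sample...
x1 = np.array([[3.0, 1.0], [1.0, 2.0], [2.0, 0.5]])
y1 = np.array([0.3, 0.1, 0.2])
perm = np.array([2, 0, 1])
x2, y2 = x1[perm], y1[perm]

# ...collapse to the identical ranked representation, so the DNN
# never needs to see (and learn from) both orderings separately.
rx1, ry1 = rank_sample(x1, y1)
rx2, ry2 = rank_sample(x2, y2)
assert np.allclose(rx1, rx2) and np.allclose(ry1, ry2)
```

Because every permutation of a K-user sample collapses to the same ranked form, the effective input space the DNN must cover shrinks, which is the intuition behind the reported reduction in training samples.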